* [PATCH 0/5] A handful of KVM/ARM fixes
@ 2013-06-19 13:20 Marc Zyngier
2013-06-19 13:20 ` [PATCH 1/5] ARM: KVM: perform save/restore of PAR Marc Zyngier
` (5 more replies)
0 siblings, 6 replies; 24+ messages in thread
From: Marc Zyngier @ 2013-06-19 13:20 UTC (permalink / raw)
To: linux-arm-kernel
This is a small patch series that I accumulated while tracking what
turned out to be an unrelated issue.
The first patch was already posted a while ago and is included here
for completeness. The others are fairly simple fixes for missing
barriers and exclusive-monitor clears that are actually required.
This has been tested on TC2, and is also available in my tree:
git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git kvm-arm/fixes-3.10-rc6
Marc Zyngier (5):
ARM: KVM: perform save/restore of PAR
ARM: KVM: add missing dsb before invalidating Stage-2 TLBs
ARM: KVM: make sure maintainance operation complete before world
switch
ARM: KVM: clear exclusive monitor on all exception returns
ARM: KVM: issue a DSB after cache maintainance operations
arch/arm/include/asm/kvm_asm.h | 22 ++++++++++++----------
arch/arm/kvm/coproc.c | 6 ++++++
arch/arm/kvm/interrupts.S | 25 ++++++++++++++++++++++++-
arch/arm/kvm/interrupts_head.S | 10 ++++++++--
4 files changed, 50 insertions(+), 13 deletions(-)
--
1.8.2.3
* [PATCH 1/5] ARM: KVM: perform save/restore of PAR
2013-06-19 13:20 [PATCH 0/5] A handful of KVM/ARM fixes Marc Zyngier
@ 2013-06-19 13:20 ` Marc Zyngier
2013-06-19 13:20 ` [PATCH 2/5] ARM: KVM: add missing dsb before invalidating Stage-2 TLBs Marc Zyngier
` (4 subsequent siblings)
5 siblings, 0 replies; 24+ messages in thread
From: Marc Zyngier @ 2013-06-19 13:20 UTC (permalink / raw)
To: linux-arm-kernel
Not saving PAR is an unfortunate oversight. If the guest performs
an AT* operation and gets scheduled out before reading the result
of the translation from PAR, its contents could be corrupted by another
guest or the host.
Saving this register is made slightly more complicated as KVM also
uses it on the permission fault handling path, leading to an ugly
"stash and restore" sequence. Fortunately, this is already a slow
path so we don't really care. Also, Linux doesn't do any AT*
operation, so Linux guests are not impacted by this bug.
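For illustration only (this snippet is not part of the patch; register
choices are arbitrary), the guest-visible pattern that relies on PAR
surviving a world switch looks roughly like this:

  mcr   p15, 0, r0, c7, c8, 0   @ ATS1CPR: translate the VA in r0
  isb                           @ wait for the translation to complete
  @ a world switch at this point must not clobber PAR
  mrrc  p15, 0, r1, r2, c7      @ read the 64-bit PAR holding the result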
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/include/asm/kvm_asm.h | 22 ++++++++++++----------
arch/arm/kvm/coproc.c | 4 ++++
arch/arm/kvm/interrupts.S | 12 +++++++++++-
arch/arm/kvm/interrupts_head.S | 10 ++++++++--
4 files changed, 35 insertions(+), 13 deletions(-)
diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 18d5032..4bb08e3 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -37,16 +37,18 @@
#define c5_AIFSR 15 /* Auxilary Instrunction Fault Status R */
#define c6_DFAR 16 /* Data Fault Address Register */
#define c6_IFAR 17 /* Instruction Fault Address Register */
-#define c9_L2CTLR 18 /* Cortex A15 L2 Control Register */
-#define c10_PRRR 19 /* Primary Region Remap Register */
-#define c10_NMRR 20 /* Normal Memory Remap Register */
-#define c12_VBAR 21 /* Vector Base Address Register */
-#define c13_CID 22 /* Context ID Register */
-#define c13_TID_URW 23 /* Thread ID, User R/W */
-#define c13_TID_URO 24 /* Thread ID, User R/O */
-#define c13_TID_PRIV 25 /* Thread ID, Privileged */
-#define c14_CNTKCTL 26 /* Timer Control Register (PL1) */
-#define NR_CP15_REGS 27 /* Number of regs (incl. invalid) */
+#define c7_PAR 18 /* Physical Address Register */
+#define c7_PAR_high 19 /* PAR top 32 bits */
+#define c9_L2CTLR 20 /* Cortex A15 L2 Control Register */
+#define c10_PRRR 21 /* Primary Region Remap Register */
+#define c10_NMRR 22 /* Normal Memory Remap Register */
+#define c12_VBAR 23 /* Vector Base Address Register */
+#define c13_CID 24 /* Context ID Register */
+#define c13_TID_URW 25 /* Thread ID, User R/W */
+#define c13_TID_URO 26 /* Thread ID, User R/O */
+#define c13_TID_PRIV 27 /* Thread ID, Privileged */
+#define c14_CNTKCTL 28 /* Timer Control Register (PL1) */
+#define NR_CP15_REGS 29 /* Number of regs (incl. invalid) */
#define ARM_EXCEPTION_RESET 0
#define ARM_EXCEPTION_UNDEFINED 1
diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
index 8eea97b..4a51990 100644
--- a/arch/arm/kvm/coproc.c
+++ b/arch/arm/kvm/coproc.c
@@ -180,6 +180,10 @@ static const struct coproc_reg cp15_regs[] = {
NULL, reset_unknown, c6_DFAR },
{ CRn( 6), CRm( 0), Op1( 0), Op2( 2), is32,
NULL, reset_unknown, c6_IFAR },
+
+ /* PAR swapped by interrupt.S */
+ { CRn( 7), Op1( 0), is64, NULL, reset_unknown64, c7_PAR },
+
/*
* DC{C,I,CI}SW operations:
*/
diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
index f7793df..d0a8fa3 100644
--- a/arch/arm/kvm/interrupts.S
+++ b/arch/arm/kvm/interrupts.S
@@ -414,6 +414,10 @@ guest_trap:
mrcne p15, 4, r2, c6, c0, 4 @ HPFAR
bne 3f
+ /* Preserve PAR */
+ mrrc p15, 0, r0, r1, c7 @ PAR
+ push {r0, r1}
+
/* Resolve IPA using the xFAR */
mcr p15, 0, r2, c7, c8, 0 @ ATS1CPR
isb
@@ -424,13 +428,19 @@ guest_trap:
lsl r2, r2, #4
orr r2, r2, r1, lsl #24
+ /* Restore PAR */
+ pop {r0, r1}
+ mcrr p15, 0, r0, r1, c7 @ PAR
+
3: load_vcpu @ Load VCPU pointer to r0
str r2, [r0, #VCPU_HPFAR]
1: mov r1, #ARM_EXCEPTION_HVC
b __kvm_vcpu_return
-4: pop {r0, r1, r2} @ Failed translation, return to guest
+4: pop {r0, r1} @ Failed translation, return to guest
+ mcrr p15, 0, r0, r1, c7 @ PAR
+ pop {r0, r1, r2}
eret
/*
diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
index 3c8f2f0..2478af1 100644
--- a/arch/arm/kvm/interrupts_head.S
+++ b/arch/arm/kvm/interrupts_head.S
@@ -302,11 +302,14 @@ vcpu .req r0 @ vcpu pointer always in r0
.endif
mrc p15, 0, r2, c14, c1, 0 @ CNTKCTL
+ mrrc p15, 0, r3, r4, c7 @ PAR
.if \store_to_vcpu == 0
- push {r2}
+ push {r2-r4}
.else
str r2, [vcpu, #CP15_OFFSET(c14_CNTKCTL)]
+ add r12, vcpu, #CP15_OFFSET(c7_PAR)
+ strd r3, r4, [r12]
.endif
.endm
@@ -319,12 +322,15 @@ vcpu .req r0 @ vcpu pointer always in r0
*/
.macro write_cp15_state read_from_vcpu
.if \read_from_vcpu == 0
- pop {r2}
+ pop {r2-r4}
.else
ldr r2, [vcpu, #CP15_OFFSET(c14_CNTKCTL)]
+ add r12, vcpu, #CP15_OFFSET(c7_PAR)
+ ldrd r3, r4, [r12]
.endif
mcr p15, 0, r2, c14, c1, 0 @ CNTKCTL
+ mcrr p15, 0, r3, r4, c7 @ PAR
.if \read_from_vcpu == 0
pop {r2-r12}
--
1.8.2.3
* [PATCH 2/5] ARM: KVM: add missing dsb before invalidating Stage-2 TLBs
2013-06-19 13:20 [PATCH 0/5] A handful of KVM/ARM fixes Marc Zyngier
2013-06-19 13:20 ` [PATCH 1/5] ARM: KVM: perform save/restore of PAR Marc Zyngier
@ 2013-06-19 13:20 ` Marc Zyngier
2013-06-20 0:05 ` Christoffer Dall
2013-06-20 10:47 ` Will Deacon
2013-06-19 13:20 ` [PATCH 3/5] ARM: KVM: make sure maintainance operation complete before world switch Marc Zyngier
` (3 subsequent siblings)
5 siblings, 2 replies; 24+ messages in thread
From: Marc Zyngier @ 2013-06-19 13:20 UTC (permalink / raw)
To: linux-arm-kernel
When performing a Stage-2 TLB invalidation, it is necessary to
make sure the write to the page tables is observable by all CPUs.
For this purpose, add a dsb instruction to __kvm_tlb_flush_vmid_ipa
before doing the TLB invalidation itself.
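As a rough sketch of the ordering this enforces (illustrative only;
the exact TLB operation used by __kvm_tlb_flush_vmid_ipa is an
assumption here, see the diff below for the real entry sequence):

  str   r1, [r0]               @ Stage-2 PTE update, done earlier in C
  dsb                          @ added: make the PTE write observable
  mcrr  p15, 6, r2, r3, c2     @ write VTTBR to select the guest's VMID
  isb
  mcr   p15, 0, r0, c8, c3, 0  @ e.g. TLBIALLIS to drop the stale entries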
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/interrupts.S | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
index d0a8fa3..afa6c04 100644
--- a/arch/arm/kvm/interrupts.S
+++ b/arch/arm/kvm/interrupts.S
@@ -49,6 +49,7 @@ __kvm_hyp_code_start:
ENTRY(__kvm_tlb_flush_vmid_ipa)
push {r2, r3}
+ dsb
add r0, r0, #KVM_VTTBR
ldrd r2, r3, [r0]
mcrr p15, 6, r2, r3, c2 @ Write VTTBR
--
1.8.2.3
* [PATCH 3/5] ARM: KVM: make sure maintainance operation complete before world switch
2013-06-19 13:20 [PATCH 0/5] A handful of KVM/ARM fixes Marc Zyngier
2013-06-19 13:20 ` [PATCH 1/5] ARM: KVM: perform save/restore of PAR Marc Zyngier
2013-06-19 13:20 ` [PATCH 2/5] ARM: KVM: add missing dsb before invalidating Stage-2 TLBs Marc Zyngier
@ 2013-06-19 13:20 ` Marc Zyngier
2013-06-20 0:18 ` Christoffer Dall
2013-06-20 10:48 ` Will Deacon
2013-06-19 13:20 ` [PATCH 4/5] ARM: KVM: clear exclusive monitor on all exception returns Marc Zyngier
` (2 subsequent siblings)
5 siblings, 2 replies; 24+ messages in thread
From: Marc Zyngier @ 2013-06-19 13:20 UTC (permalink / raw)
To: linux-arm-kernel
We may have preempted the guest while it was performing a maintenance
operation (TLB invalidation, for example). Make sure it completes
before we do anything else by adding the necessary barriers.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/interrupts.S | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
index afa6c04..3124e0f 100644
--- a/arch/arm/kvm/interrupts.S
+++ b/arch/arm/kvm/interrupts.S
@@ -149,6 +149,15 @@ __kvm_vcpu_return:
* r0: vcpu pointer
* r1: exception code
*/
+
+ /*
+ * We may have preempted the guest while it was performing a
+ * maintainance operation (TLB invalidation, for example). Make
+ * sure it completes before we do anything else.
+ */
+ dsb
+ isb
+
save_guest_regs
@ Set VMID == 0
--
1.8.2.3
* [PATCH 4/5] ARM: KVM: clear exclusive monitor on all exception returns
2013-06-19 13:20 [PATCH 0/5] A handful of KVM/ARM fixes Marc Zyngier
` (2 preceding siblings ...)
2013-06-19 13:20 ` [PATCH 3/5] ARM: KVM: make sure maintainance operation complete before world switch Marc Zyngier
@ 2013-06-19 13:20 ` Marc Zyngier
2013-06-20 0:27 ` Christoffer Dall
2013-06-19 13:20 ` [PATCH 5/5] ARM: KVM: issue a DSB after cache maintainance operations Marc Zyngier
2013-06-20 18:33 ` [PATCH 0/5] A handful of KVM/ARM fixes Christoffer Dall
5 siblings, 1 reply; 24+ messages in thread
From: Marc Zyngier @ 2013-06-19 13:20 UTC (permalink / raw)
To: linux-arm-kernel
Make sure we clear the exclusive monitor on all exception returns,
which could otherwise lead to lock corruption.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/interrupts.S | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
index 3124e0f..750f051 100644
--- a/arch/arm/kvm/interrupts.S
+++ b/arch/arm/kvm/interrupts.S
@@ -301,6 +301,7 @@ THUMB( orr r2, r2, #PSR_T_BIT )
ldr r2, =BSYM(panic)
msr ELR_hyp, r2
ldr r0, =\panic_str
+ clrex @ Clear exclusive monitor
eret
.endm
@@ -450,6 +451,7 @@ guest_trap:
4: pop {r0, r1} @ Failed translation, return to guest
mcrr p15, 0, r0, r1, c7 @ PAR
+ clrex
pop {r0, r1, r2}
eret
@@ -476,6 +478,7 @@ switch_to_guest_vfp:
pop {r3-r7}
pop {r0-r2}
+ clrex
eret
#endif
--
1.8.2.3
* [PATCH 5/5] ARM: KVM: issue a DSB after cache maintainance operations
2013-06-19 13:20 [PATCH 0/5] A handful of KVM/ARM fixes Marc Zyngier
` (3 preceding siblings ...)
2013-06-19 13:20 ` [PATCH 4/5] ARM: KVM: clear exclusive monitor on all exception returns Marc Zyngier
@ 2013-06-19 13:20 ` Marc Zyngier
2013-06-20 10:46 ` Will Deacon
2013-06-20 18:33 ` [PATCH 0/5] A handful of KVM/ARM fixes Christoffer Dall
5 siblings, 1 reply; 24+ messages in thread
From: Marc Zyngier @ 2013-06-19 13:20 UTC (permalink / raw)
To: linux-arm-kernel
When performing Set/Way cache maintenance operations, it is
important to make sure the operation completes by issuing a DSB.
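At the instruction level, the requirement amounts to the following
(illustrative; DCCISW is shown as one example of the emulated
operations):

  mcr   p15, 0, r0, c7, c14, 2  @ DCCISW: clean+invalidate by set/way
  dsb                           @ added here (via dsb() in C): the
                                @ maintenance must complete before we
                                @ return to the guest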
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/coproc.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
index 4a51990..9e6bef4 100644
--- a/arch/arm/kvm/coproc.c
+++ b/arch/arm/kvm/coproc.c
@@ -106,6 +106,8 @@ static bool access_dcsw(struct kvm_vcpu *vcpu,
break;
}
+ dsb();
+
done:
put_cpu();
--
1.8.2.3
* [PATCH 2/5] ARM: KVM: add missing dsb before invalidating Stage-2 TLBs
2013-06-19 13:20 ` [PATCH 2/5] ARM: KVM: add missing dsb before invalidating Stage-2 TLBs Marc Zyngier
@ 2013-06-20 0:05 ` Christoffer Dall
2013-06-20 0:08 ` Christoffer Dall
2013-06-20 10:47 ` Will Deacon
1 sibling, 1 reply; 24+ messages in thread
From: Christoffer Dall @ 2013-06-20 0:05 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Jun 19, 2013 at 02:20:03PM +0100, Marc Zyngier wrote:
> When performing a Stage-2 TLB invalidation, it is necessary to
> make sure the write to the page tables is observable by all CPUs.
>
> For this purpose, add a dsb instruction to __kvm_tlb_flush_vmid_ipa
> before doing the TLB invalidation itself.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/kvm/interrupts.S | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
> index d0a8fa3..afa6c04 100644
> --- a/arch/arm/kvm/interrupts.S
> +++ b/arch/arm/kvm/interrupts.S
> @@ -49,6 +49,7 @@ __kvm_hyp_code_start:
> ENTRY(__kvm_tlb_flush_vmid_ipa)
> push {r2, r3}
>
> + dsb
shouldn't this be a dmb then?
thinking that a dsb only ensures completion between load/stores and not
between the preceding store and the actual invalidate operation?
or are we relying on the fact that the store must complete before the
ldrd below, which must happen before the mcrr, which must happen before
the invalidate (?), and therefore it's all good?
> add r0, r0, #KVM_VTTBR
> ldrd r2, r3, [r0]
> mcrr p15, 6, r2, r3, c2 @ Write VTTBR
> --
> 1.8.2.3
>
>
>
* [PATCH 2/5] ARM: KVM: add missing dsb before invalidating Stage-2 TLBs
2013-06-20 0:05 ` Christoffer Dall
@ 2013-06-20 0:08 ` Christoffer Dall
0 siblings, 0 replies; 24+ messages in thread
From: Christoffer Dall @ 2013-06-20 0:08 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Jun 19, 2013 at 05:05:22PM -0700, Christoffer Dall wrote:
> On Wed, Jun 19, 2013 at 02:20:03PM +0100, Marc Zyngier wrote:
> > When performing a Stage-2 TLB invalidation, it is necessary to
> > make sure the write to the page tables is observable by all CPUs.
> >
> > For this purpose, add a dsb instruction to __kvm_tlb_flush_vmid_ipa
> > before doing the TLB invalidation itself.
> >
> > Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> > ---
> > arch/arm/kvm/interrupts.S | 1 +
> > 1 file changed, 1 insertion(+)
> >
> > diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
> > index d0a8fa3..afa6c04 100644
> > --- a/arch/arm/kvm/interrupts.S
> > +++ b/arch/arm/kvm/interrupts.S
> > @@ -49,6 +49,7 @@ __kvm_hyp_code_start:
> > ENTRY(__kvm_tlb_flush_vmid_ipa)
> > push {r2, r3}
> >
> > + dsb
>
> shouldn't this be a dmb then?
>
> thinking that a dsb only ensures completion between load/stores and not
> between the preceeding store and the actual invalidate operation?
>
> or are we relying on the fact that the store must complete before the
> ldrd below, which must happen before the mcrr, which must happen before
> the invalidate (?), and therefore it's all good?
>
bah, ignore my crazy rambling, I completely switched the two when I read
the definitions.
> > add r0, r0, #KVM_VTTBR
> > ldrd r2, r3, [r0]
> > mcrr p15, 6, r2, r3, c2 @ Write VTTBR
> > --
> > 1.8.2.3
> >
> >
> >
* [PATCH 3/5] ARM: KVM: make sure maintainance operation complete before world switch
2013-06-19 13:20 ` [PATCH 3/5] ARM: KVM: make sure maintainance operation complete before world switch Marc Zyngier
@ 2013-06-20 0:18 ` Christoffer Dall
2013-06-20 8:13 ` Marc Zyngier
2013-06-20 10:48 ` Will Deacon
1 sibling, 1 reply; 24+ messages in thread
From: Christoffer Dall @ 2013-06-20 0:18 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Jun 19, 2013 at 02:20:04PM +0100, Marc Zyngier wrote:
> We may have preempted the guest while it was performing a maintainance
> operation (TLB invalidation, for example). Make sure it completes
> before we do anything else by adding the necessary barriers.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/kvm/interrupts.S | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
> index afa6c04..3124e0f 100644
> --- a/arch/arm/kvm/interrupts.S
> +++ b/arch/arm/kvm/interrupts.S
> @@ -149,6 +149,15 @@ __kvm_vcpu_return:
> * r0: vcpu pointer
> * r1: exception code
> */
> +
> + /*
> + * We may have preempted the guest while it was performing a
> + * maintainance operation (TLB invalidation, for example). Make
> + * sure it completes before we do anything else.
> + */
Can you explain what could go wrong here without these two instructions?
> + dsb
> + isb
> +
> save_guest_regs
>
> @ Set VMID == 0
> --
> 1.8.2.3
>
>
>
* [PATCH 4/5] ARM: KVM: clear exclusive monitor on all exception returns
2013-06-19 13:20 ` [PATCH 4/5] ARM: KVM: clear exclusive monitor on all exception returns Marc Zyngier
@ 2013-06-20 0:27 ` Christoffer Dall
2013-06-20 8:29 ` Marc Zyngier
0 siblings, 1 reply; 24+ messages in thread
From: Christoffer Dall @ 2013-06-20 0:27 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Jun 19, 2013 at 02:20:05PM +0100, Marc Zyngier wrote:
> Make sure we clear the exclusive movitor on all exception returns,
> which otherwise could lead to lock corruptions.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/kvm/interrupts.S | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
> index 3124e0f..750f051 100644
> --- a/arch/arm/kvm/interrupts.S
> +++ b/arch/arm/kvm/interrupts.S
> @@ -301,6 +301,7 @@ THUMB( orr r2, r2, #PSR_T_BIT )
> ldr r2, =BSYM(panic)
> msr ELR_hyp, r2
> ldr r0, =\panic_str
> + clrex @ Clear exclusive monitor
on a panic? sure, why not ;)
> eret
> .endm
>
> @@ -450,6 +451,7 @@ guest_trap:
>
> 4: pop {r0, r1} @ Failed translation, return to guest
> mcrr p15, 0, r0, r1, c7 @ PAR
> + clrex
I gather this is because any store can potentially leave the system with
an exclusive monitor taken?
patch looks fine.
> pop {r0, r1, r2}
> eret
>
> @@ -476,6 +478,7 @@ switch_to_guest_vfp:
>
> pop {r3-r7}
> pop {r0-r2}
> + clrex
> eret
> #endif
>
> --
> 1.8.2.3
>
>
>
* [PATCH 3/5] ARM: KVM: make sure maintainance operation complete before world switch
2013-06-20 0:18 ` Christoffer Dall
@ 2013-06-20 8:13 ` Marc Zyngier
2013-06-20 17:14 ` Christoffer Dall
0 siblings, 1 reply; 24+ messages in thread
From: Marc Zyngier @ 2013-06-20 8:13 UTC (permalink / raw)
To: linux-arm-kernel
On 20/06/13 01:18, Christoffer Dall wrote:
> On Wed, Jun 19, 2013 at 02:20:04PM +0100, Marc Zyngier wrote:
>> We may have preempted the guest while it was performing a maintainance
>> operation (TLB invalidation, for example). Make sure it completes
>> before we do anything else by adding the necessary barriers.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>> arch/arm/kvm/interrupts.S | 9 +++++++++
>> 1 file changed, 9 insertions(+)
>>
>> diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
>> index afa6c04..3124e0f 100644
>> --- a/arch/arm/kvm/interrupts.S
>> +++ b/arch/arm/kvm/interrupts.S
>> @@ -149,6 +149,15 @@ __kvm_vcpu_return:
>> * r0: vcpu pointer
>> * r1: exception code
>> */
>> +
>> + /*
>> + * We may have preempted the guest while it was performing a
>> + * maintainance operation (TLB invalidation, for example). Make
>> + * sure it completes before we do anything else.
>> + */
>
> Can you explain what could go wrong here without these two instructions?
There would be no guarantee that the TLB invalidation has effectively
completed and is visible to other CPUs. Not sure that would be a
massive issue in any decent guest OS, but I thought it was worth plugging.
Another (more serious) thing I had doubts about was that we're about to
switch VMID to restore the host context. The ARM ARM doesn't clearly
specify the interaction between pending TLB maintenance and VMID
switch, and I'm worried that you could end up performing the TLB
maintenance on the *host* TLBs rather than on the guest's.
Having this dsb/isb sequence before switching VMID gives us a strong
guarantee that such a mixup cannot occur.
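Concretely, the exit-path ordering the patch gives us is (a sketch;
the VTTBR write is the existing "Set VMID == 0" code):

  dsb                          @ complete any outstanding guest maintenance
  isb
  @ ... save guest state ...
  mcrr  p15, 6, r2, r3, c2     @ write VTTBR: only now switch to the host VMID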
M.
--
Jazz is not dead. It just smells funny...
* [PATCH 4/5] ARM: KVM: clear exclusive monitor on all exception returns
2013-06-20 0:27 ` Christoffer Dall
@ 2013-06-20 8:29 ` Marc Zyngier
0 siblings, 0 replies; 24+ messages in thread
From: Marc Zyngier @ 2013-06-20 8:29 UTC (permalink / raw)
To: linux-arm-kernel
On 20/06/13 01:27, Christoffer Dall wrote:
> On Wed, Jun 19, 2013 at 02:20:05PM +0100, Marc Zyngier wrote:
>> Make sure we clear the exclusive movitor on all exception returns,
>> which otherwise could lead to lock corruptions.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>> arch/arm/kvm/interrupts.S | 3 +++
>> 1 file changed, 3 insertions(+)
>>
>> diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
>> index 3124e0f..750f051 100644
>> --- a/arch/arm/kvm/interrupts.S
>> +++ b/arch/arm/kvm/interrupts.S
>> @@ -301,6 +301,7 @@ THUMB( orr r2, r2, #PSR_T_BIT )
>> ldr r2, =BSYM(panic)
>> msr ELR_hyp, r2
>> ldr r0, =\panic_str
>> + clrex @ Clear exclusive monitor
>
> on a panic? sure, why not ;)
These days, you can have a lot of things going on after a panic: kexec,
for example. You really want to return to the host with a clean state if
at all possible.
>> eret
>> .endm
>>
>> @@ -450,6 +451,7 @@ guest_trap:
>>
>> 4: pop {r0, r1} @ Failed translation, return to guest
>> mcrr p15, 0, r0, r1, c7 @ PAR
>> + clrex
>
> I gather this is because any store can potentially leave the system with
> an exclusive monitor taken?
My scenario was the following:
ldrex -> translation fault, page mapped in Stage-2, read-only
strex -> permission fault
On another vcpu, the page is unmapped (swapped out, for example).
Translation then fails on the permission fault path, and we return to
the guest. But we must make sure that the exclusive monitor is
effectively cleared. Linux wouldn't be affected (me think), but that's
always worth doing.
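In instruction terms, the guest-side sequence in that scenario is
roughly (illustrative only):

  ldrex  r0, [r1]      @ page mapped read-only at Stage-2: load succeeds,
                       @ the exclusive monitor is now set
  strex  r2, r0, [r1]  @ Stage-2 permission fault, trap to Hyp
  @ Hyp fails to resolve the IPA (the page was unmapped by another vcpu)
  @ and erets straight back; without clrex the monitor may still be set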
That's probably one of the things I like the most about ARMv8: eret
always implies clrex. No questions asked.
> patch looks fine.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
* [PATCH 5/5] ARM: KVM: issue a DSB after cache maintainance operations
2013-06-19 13:20 ` [PATCH 5/5] ARM: KVM: issue a DSB after cache maintainance operations Marc Zyngier
@ 2013-06-20 10:46 ` Will Deacon
0 siblings, 0 replies; 24+ messages in thread
From: Will Deacon @ 2013-06-20 10:46 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Jun 19, 2013 at 02:20:06PM +0100, Marc Zyngier wrote:
> When performing the Set/Way cache maintainance operations, it is
> important to make sure the operation completes by issueing a DSB.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/kvm/coproc.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
> index 4a51990..9e6bef4 100644
> --- a/arch/arm/kvm/coproc.c
> +++ b/arch/arm/kvm/coproc.c
> @@ -106,6 +106,8 @@ static bool access_dcsw(struct kvm_vcpu *vcpu,
> break;
> }
>
> + dsb();
Worth noting that this can become -ishst once my barriers branch goes
upstream (aiming for 3.12).
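As a sketch of what that would look like once the barrier macros take
an option argument (the exact spelling here is an assumption):

  dsb(ishst);    /* inner-shareable, store-only completion */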
Will
* [PATCH 2/5] ARM: KVM: add missing dsb before invalidating Stage-2 TLBs
2013-06-19 13:20 ` [PATCH 2/5] ARM: KVM: add missing dsb before invalidating Stage-2 TLBs Marc Zyngier
2013-06-20 0:05 ` Christoffer Dall
@ 2013-06-20 10:47 ` Will Deacon
1 sibling, 0 replies; 24+ messages in thread
From: Will Deacon @ 2013-06-20 10:47 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Jun 19, 2013 at 02:20:03PM +0100, Marc Zyngier wrote:
> When performing a Stage-2 TLB invalidation, it is necessary to
> make sure the write to the page tables is observable by all CPUs.
>
> For this purpose, add a dsb instruction to __kvm_tlb_flush_vmid_ipa
> before doing the TLB invalidation itself.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/kvm/interrupts.S | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
> index d0a8fa3..afa6c04 100644
> --- a/arch/arm/kvm/interrupts.S
> +++ b/arch/arm/kvm/interrupts.S
> @@ -49,6 +49,7 @@ __kvm_hyp_code_start:
> ENTRY(__kvm_tlb_flush_vmid_ipa)
> push {r2, r3}
>
> + dsb
This can be dsb ish.
Will
* [PATCH 3/5] ARM: KVM: make sure maintainance operation complete before world switch
2013-06-19 13:20 ` [PATCH 3/5] ARM: KVM: make sure maintainance operation complete before world switch Marc Zyngier
2013-06-20 0:18 ` Christoffer Dall
@ 2013-06-20 10:48 ` Will Deacon
1 sibling, 0 replies; 24+ messages in thread
From: Will Deacon @ 2013-06-20 10:48 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Jun 19, 2013 at 02:20:04PM +0100, Marc Zyngier wrote:
> We may have preempted the guest while it was performing a maintainance
> operation (TLB invalidation, for example). Make sure it completes
> before we do anything else by adding the necessary barriers.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/kvm/interrupts.S | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
> index afa6c04..3124e0f 100644
> --- a/arch/arm/kvm/interrupts.S
> +++ b/arch/arm/kvm/interrupts.S
> @@ -149,6 +149,15 @@ __kvm_vcpu_return:
> * r0: vcpu pointer
> * r1: exception code
> */
> +
> + /*
> + * We may have preempted the guest while it was performing a
> + * maintainance operation (TLB invalidation, for example). Make
> + * sure it completes before we do anything else.
> + */
> + dsb
Same here; you can use the inner-shareable version.
Will
* [PATCH 3/5] ARM: KVM: make sure maintainance operation complete before world switch
2013-06-20 8:13 ` Marc Zyngier
@ 2013-06-20 17:14 ` Christoffer Dall
2013-06-20 17:29 ` Marc Zyngier
2013-06-20 18:15 ` Will Deacon
0 siblings, 2 replies; 24+ messages in thread
From: Christoffer Dall @ 2013-06-20 17:14 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Jun 20, 2013 at 09:13:22AM +0100, Marc Zyngier wrote:
> On 20/06/13 01:18, Christoffer Dall wrote:
> > On Wed, Jun 19, 2013 at 02:20:04PM +0100, Marc Zyngier wrote:
> >> We may have preempted the guest while it was performing a maintainance
> >> operation (TLB invalidation, for example). Make sure it completes
> >> before we do anything else by adding the necessary barriers.
> >>
> >> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> >> ---
> >> arch/arm/kvm/interrupts.S | 9 +++++++++
> >> 1 file changed, 9 insertions(+)
> >>
> >> diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
> >> index afa6c04..3124e0f 100644
> >> --- a/arch/arm/kvm/interrupts.S
> >> +++ b/arch/arm/kvm/interrupts.S
> >> @@ -149,6 +149,15 @@ __kvm_vcpu_return:
> >> * r0: vcpu pointer
> >> * r1: exception code
> >> */
> >> +
> >> + /*
> >> + * We may have preempted the guest while it was performing a
> >> + * maintainance operation (TLB invalidation, for example). Make
> >> + * sure it completes before we do anything else.
> >> + */
> >
> > Can you explain what could go wrong here without these two instructions?
>
> There would be no guarantee that the TLB invalidation has effectively
> completed, and is visible by other CPUs. Not sure that would be a
> massive issue in any decent guest OS, but I thought it was worth plugging.
ok, I was trying to think about how it would break, and if a guest needs
a TLB invalidation to be visible to other CPUs it would have to have a
dsb/isb itself after the operation, and that would eventually be
executed once the VCPU was rescheduled, but potentially on another CPU,
but then I wonder if the PCPU migration on the host wouldn't take care
of it?
It sounds like you're not 100% sure it actually breaks something (or am
I reading it wrong?), but if the performance impact is minor, why not be
on the safe side I guess.
>
> Another (more serious) thing I had doubts about was that we're about to
> switch VMID to restore the host context. The ARM ARM doesn't clearly
> specify the interaction between pending TLB maintainance and VMID
> switch, and I'm worried that you could end up performing the TLB
> maintainance on the *host* TLBs rather than on the guest's.
>
> Having this dsb/isb sequence before switching VMID gives us a strong
> guarantee that such a mixup cannot occur.
>
This is really hurting my brain.
Again, it seems the argument is, why not, and maybe it's required.
And indeed, if it gives us peace of mind, I'm ok with it.
Sorry about this OCD.
-Christoffer
* [PATCH 3/5] ARM: KVM: make sure maintainance operation complete before world switch
2013-06-20 17:14 ` Christoffer Dall
@ 2013-06-20 17:29 ` Marc Zyngier
2013-06-20 18:15 ` Will Deacon
1 sibling, 0 replies; 24+ messages in thread
From: Marc Zyngier @ 2013-06-20 17:29 UTC (permalink / raw)
To: linux-arm-kernel
On 20/06/13 18:14, Christoffer Dall wrote:
> On Thu, Jun 20, 2013 at 09:13:22AM +0100, Marc Zyngier wrote:
>> On 20/06/13 01:18, Christoffer Dall wrote:
>>> On Wed, Jun 19, 2013 at 02:20:04PM +0100, Marc Zyngier wrote:
>>>> We may have preempted the guest while it was performing a maintainance
>>>> operation (TLB invalidation, for example). Make sure it completes
>>>> before we do anything else by adding the necessary barriers.
>>>>
>>>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>>>> ---
>>>> arch/arm/kvm/interrupts.S | 9 +++++++++
>>>> 1 file changed, 9 insertions(+)
>>>>
>>>> diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
>>>> index afa6c04..3124e0f 100644
>>>> --- a/arch/arm/kvm/interrupts.S
>>>> +++ b/arch/arm/kvm/interrupts.S
>>>> @@ -149,6 +149,15 @@ __kvm_vcpu_return:
>>>> * r0: vcpu pointer
>>>> * r1: exception code
>>>> */
>>>> +
>>>> + /*
>>>> + * We may have preempted the guest while it was performing a
>>>> + * maintainance operation (TLB invalidation, for example). Make
>>>> + * sure it completes before we do anything else.
>>>> + */
>>>
>>> Can you explain what could go wrong here without these two instructions?
>>
>> There would be no guarantee that the TLB invalidation has effectively
>> completed, and is visible by other CPUs. Not sure that would be a
>> massive issue in any decent guest OS, but I thought it was worth plugging.
>
> ok, I was trying to think about how it would break, and if a guest needs
> a TLB invalidation to be visisble by other CPUs it would have to have a
> dsb/isb itself after the operation, and that would eventually be
> executed once the VCPU was rescheduled, but potentially on another CPU,
> but then I wonder if the PCPU migration on the host wouldn't take care
> of it?
>
> It sounds like you're not 100% sure it actually breaks something (or am
> I reading it wrong?), but if the performance impact is minor, why not be
> on the safe side I guess.
I think a well written guest wouldn't be affected.
>>
>> Another (more serious) thing I had doubts about was that we're about to
>> switch VMID to restore the host context. The ARM ARM doesn't clearly
>> specify the interaction between pending TLB maintainance and VMID
>> switch, and I'm worried that you could end up performing the TLB
>> maintainance on the *host* TLBs rather than on the guest's.
>>
>> Having this dsb/isb sequence before switching VMID gives us a strong
>> guarantee that such a mixup cannot occur.
>>
> This is really hurting my brain.
>
> Again, it seems the argument is, why not, and maybe it's required.
> And indeed, if it gives us peace of mind, I'm ok with it.
I guess my problem here is that the spec isn't 100% clear about what
happens. Which means a compliant implementation could do things that
would go horribly wrong.
I'm fairly confident that Cortex-A15 doesn't require this. But other
implementations might, and that's what I'm trying to cover here.
> Sorry about this OCD.
No worries.
M.
--
Jazz is not dead. It just smells funny...
* [PATCH 3/5] ARM: KVM: make sure maintainance operation complete before world switch
2013-06-20 17:14 ` Christoffer Dall
2013-06-20 17:29 ` Marc Zyngier
@ 2013-06-20 18:15 ` Will Deacon
2013-06-20 18:28 ` Christoffer Dall
1 sibling, 1 reply; 24+ messages in thread
From: Will Deacon @ 2013-06-20 18:15 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Jun 20, 2013 at 06:14:09PM +0100, Christoffer Dall wrote:
> On Thu, Jun 20, 2013 at 09:13:22AM +0100, Marc Zyngier wrote:
> > On 20/06/13 01:18, Christoffer Dall wrote:
> > > On Wed, Jun 19, 2013 at 02:20:04PM +0100, Marc Zyngier wrote:
> > >> We may have preempted the guest while it was performing a maintainance
> > >> operation (TLB invalidation, for example). Make sure it completes
> > >> before we do anything else by adding the necessary barriers.
> > >>
> > >> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> > >> ---
> > >> arch/arm/kvm/interrupts.S | 9 +++++++++
> > >> 1 file changed, 9 insertions(+)
> > >>
> > >> diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
> > >> index afa6c04..3124e0f 100644
> > >> --- a/arch/arm/kvm/interrupts.S
> > >> +++ b/arch/arm/kvm/interrupts.S
> > >> @@ -149,6 +149,15 @@ __kvm_vcpu_return:
> > >> * r0: vcpu pointer
> > >> * r1: exception code
> > >> */
> > >> +
> > >> + /*
> > >> + * We may have preempted the guest while it was performing a
> > >> + * maintainance operation (TLB invalidation, for example). Make
> > >> + * sure it completes before we do anything else.
> > >> + */
> > >
> > > Can you explain what could go wrong here without these two instructions?
> >
> > There would be no guarantee that the TLB invalidation has effectively
> > completed, and is visible by other CPUs. Not sure that would be a
> > massive issue in any decent guest OS, but I thought it was worth plugging.
>
> ok, I was trying to think about how it would break, and if a guest needs
> a TLB invalidation to be visisble by other CPUs it would have to have a
> dsb/isb itself after the operation, and that would eventually be
> executed once the VCPU was rescheduled, but potentially on another CPU,
> but then I wonder if the PCPU migration on the host wouldn't take care
> of it?
Actually, it's worse than both of you think :)
The dsb *must* be executed on the same physical CPU as the TLB invalidation.
The same virtual CPU isn't enough, which is all that is guaranteed by the
guest. If you don't have a dsb on your vcpu migration path, then you need
something here.
The same thing applies to cache maintenance operations.
Will
* [PATCH 3/5] ARM: KVM: make sure maintainance operation complete before world switch
2013-06-20 18:15 ` Will Deacon
@ 2013-06-20 18:28 ` Christoffer Dall
2013-06-20 18:38 ` Will Deacon
0 siblings, 1 reply; 24+ messages in thread
From: Christoffer Dall @ 2013-06-20 18:28 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Jun 20, 2013 at 07:15:25PM +0100, Will Deacon wrote:
> On Thu, Jun 20, 2013 at 06:14:09PM +0100, Christoffer Dall wrote:
> > On Thu, Jun 20, 2013 at 09:13:22AM +0100, Marc Zyngier wrote:
> > > On 20/06/13 01:18, Christoffer Dall wrote:
> > > > On Wed, Jun 19, 2013 at 02:20:04PM +0100, Marc Zyngier wrote:
> > > >> We may have preempted the guest while it was performing a maintainance
> > > >> operation (TLB invalidation, for example). Make sure it completes
> > > >> before we do anything else by adding the necessary barriers.
> > > >>
> > > >> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> > > >> ---
> > > >> arch/arm/kvm/interrupts.S | 9 +++++++++
> > > >> 1 file changed, 9 insertions(+)
> > > >>
> > > >> diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
> > > >> index afa6c04..3124e0f 100644
> > > >> --- a/arch/arm/kvm/interrupts.S
> > > >> +++ b/arch/arm/kvm/interrupts.S
> > > >> @@ -149,6 +149,15 @@ __kvm_vcpu_return:
> > > >> * r0: vcpu pointer
> > > >> * r1: exception code
> > > >> */
> > > >> +
> > > >> + /*
> > > >> + * We may have preempted the guest while it was performing a
> > > >> + * maintainance operation (TLB invalidation, for example). Make
> > > >> + * sure it completes before we do anything else.
> > > >> + */
> > > >
> > > > Can you explain what could go wrong here without these two instructions?
> > >
> > > There would be no guarantee that the TLB invalidation has effectively
> > > completed, and is visible by other CPUs. Not sure that would be a
> > > massive issue in any decent guest OS, but I thought it was worth plugging.
> >
> > ok, I was trying to think about how it would break, and if a guest needs
> > a TLB invalidation to be visisble by other CPUs it would have to have a
> > dsb/isb itself after the operation, and that would eventually be
> > executed once the VCPU was rescheduled, but potentially on another CPU,
> > but then I wonder if the PCPU migration on the host wouldn't take care
> > of it?
>
> Actually, it's worse than both of you think :)
>
> The dsb *must* be executed on the same physical CPU as the TLB invalidation.
> The same virtual CPU isn't enough, which is all that is guaranteed by the
> guest. If you don't have a dsb on your vcpu migration path, then you need
> something here.
>
> The same thing applies to cache maintenance operations.
>
But are we not sure that a dsb will happen anywhere in the kernel if a
process is migrated to a different core?
-Christoffer
* [PATCH 0/5] A handful of KVM/ARM fixes
2013-06-19 13:20 [PATCH 0/5] A handful of KVM/ARM fixes Marc Zyngier
` (4 preceding siblings ...)
2013-06-19 13:20 ` [PATCH 5/5] ARM: KVM: issue a DSB after cache maintainance operations Marc Zyngier
@ 2013-06-20 18:33 ` Christoffer Dall
2013-06-20 18:41 ` Marc Zyngier
5 siblings, 1 reply; 24+ messages in thread
From: Christoffer Dall @ 2013-06-20 18:33 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Jun 19, 2013 at 02:20:01PM +0100, Marc Zyngier wrote:
> This is a small patch series that I accumulated while tracking what
> turned out to be an unrelated issue.
>
> The first patch has already been posted a while ago and included here
> for completeness. The others are fairly simple missing barriers/clear
> exclusives that are actually required.
>
> This has been tested on TC2, and is also available in my tree:
> git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git kvm-arm/fixes-3.10-rc6
>
Will you update the version with the specific options to the dsb's
pointed out by Will before I merge this?
I'd like to include it for the pull request to kvm/next, which I'm
planning on getting out tomorrow.
Thanks,
-Christoffer
* [PATCH 3/5] ARM: KVM: make sure maintainance operation complete before world switch
2013-06-20 18:28 ` Christoffer Dall
@ 2013-06-20 18:38 ` Will Deacon
2013-06-20 18:50 ` Christoffer Dall
0 siblings, 1 reply; 24+ messages in thread
From: Will Deacon @ 2013-06-20 18:38 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Jun 20, 2013 at 07:28:47PM +0100, Christoffer Dall wrote:
> On Thu, Jun 20, 2013 at 07:15:25PM +0100, Will Deacon wrote:
> > On Thu, Jun 20, 2013 at 06:14:09PM +0100, Christoffer Dall wrote:
> > > ok, I was trying to think about how it would break, and if a guest needs
> > > a TLB invalidation to be visisble by other CPUs it would have to have a
> > > dsb/isb itself after the operation, and that would eventually be
> > > executed once the VCPU was rescheduled, but potentially on another CPU,
> > > but then I wonder if the PCPU migration on the host wouldn't take care
> > > of it?
> >
> > Actually, it's worse than both of you think :)
> >
> > The dsb *must* be executed on the same physical CPU as the TLB invalidation.
> > The same virtual CPU isn't enough, which is all that is guaranteed by the
> > guest. If you don't have a dsb on your vcpu migration path, then you need
> > something here.
> >
> > The same thing applies to cache maintenance operations.
> >
> But are we not sure that a dsb will happen anywhere in the kernel if a
> process is migrated to a different core?
Yes, we have a dsb when we unlock the runqueue for a CPU. That's why Linux
doesn't crash and burn usually. If vcpu migration always goes through the
usual scheduling paths, then you don't have a problem.
Will
* [PATCH 0/5] A handful of KVM/ARM fixes
2013-06-20 18:33 ` [PATCH 0/5] A handful of KVM/ARM fixes Christoffer Dall
@ 2013-06-20 18:41 ` Marc Zyngier
2013-06-20 18:48 ` Christoffer Dall
0 siblings, 1 reply; 24+ messages in thread
From: Marc Zyngier @ 2013-06-20 18:41 UTC (permalink / raw)
To: linux-arm-kernel
On 20/06/13 19:33, Christoffer Dall wrote:
> On Wed, Jun 19, 2013 at 02:20:01PM +0100, Marc Zyngier wrote:
>> This is a small patch series that I accumulated while tracking what
>> turned out to be an unrelated issue.
>>
>> The first patch has already been posted a while ago and included here
>> for completeness. The others are fairly simple missing barriers/clear
>> exclusives that are actually required.
>>
>> This has been tested on TC2, and is also available in my tree:
>> git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git kvm-arm/fixes-3.10-rc6
>>
> Will you update the version with the specific options to the dsb's
> pointed out by Will before I merge this?
>
> I'd like to include it for the pull request to kvm/next, which I'm
> planning on getting out tomorrow.
I was planning to do that tomorrow morning (UK time). Is that OK for you?
M.
--
Jazz is not dead. It just smells funny...
* [PATCH 0/5] A handful of KVM/ARM fixes
2013-06-20 18:41 ` Marc Zyngier
@ 2013-06-20 18:48 ` Christoffer Dall
0 siblings, 0 replies; 24+ messages in thread
From: Christoffer Dall @ 2013-06-20 18:48 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Jun 20, 2013 at 07:41:28PM +0100, Marc Zyngier wrote:
> On 20/06/13 19:33, Christoffer Dall wrote:
> > On Wed, Jun 19, 2013 at 02:20:01PM +0100, Marc Zyngier wrote:
> >> This is a small patch series that I accumulated while tracking what
> >> turned out to be an unrelated issue.
> >>
> >> The first patch has already been posted a while ago and included here
> >> for completeness. The others are fairly simple missing barriers/clear
> >> exclusives that are actually required.
> >>
> >> This has been tested on TC2, and is also available in my tree:
> >> git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git kvm-arm/fixes-3.10-rc6
> >>
> > Will you update the version with the specific options to the dsb's
> > pointed out by Will before I merge this?
> >
> > I'd like to include it for the pull request to kvm/next, which I'm
> > planning on getting out tomorrow.
>
> I was planning to do that tomorrow morning (UK time). Is that OK for you?
>
Perfect, thanks.
* [PATCH 3/5] ARM: KVM: make sure maintainance operation complete before world switch
2013-06-20 18:38 ` Will Deacon
@ 2013-06-20 18:50 ` Christoffer Dall
0 siblings, 0 replies; 24+ messages in thread
From: Christoffer Dall @ 2013-06-20 18:50 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Jun 20, 2013 at 07:38:18PM +0100, Will Deacon wrote:
> On Thu, Jun 20, 2013 at 07:28:47PM +0100, Christoffer Dall wrote:
> > On Thu, Jun 20, 2013 at 07:15:25PM +0100, Will Deacon wrote:
> > > On Thu, Jun 20, 2013 at 06:14:09PM +0100, Christoffer Dall wrote:
> > > > ok, I was trying to think about how it would break, and if a guest needs
> > > > a TLB invalidation to be visisble by other CPUs it would have to have a
> > > > dsb/isb itself after the operation, and that would eventually be
> > > > executed once the VCPU was rescheduled, but potentially on another CPU,
> > > > but then I wonder if the PCPU migration on the host wouldn't take care
> > > > of it?
> > >
> > > Actually, it's worse than both of you think :)
> > >
> > > The dsb *must* be executed on the same physical CPU as the TLB invalidation.
> > > The same virtual CPU isn't enough, which is all that is guaranteed by the
> > > guest. If you don't have a dsb on your vcpu migration path, then you need
> > > something here.
> > >
> > > The same thing applies to cache maintenance operations.
> > >
> > But are we not sure that a dsb will happen anywhere in the kernel if a
> > process is migrated to a different core?
>
> Yes, we have a dsb when we unlock the runqueue for a CPU. That's why Linux
> doesn't crash and burn usually. If vcpu migration always goes through the
> usual scheduling paths, then you don't have a problem.
>
Right, a vcpu is simply a thread, a process, so it gets migrated on the
host as any other process.
I gather this means we don't need these, except maybe for the VMID
rollover case, which I honestly didn't fully understand, but maybe it
can be added for that specific case instead?
-Christoffer