* [PATCH 0/4] KVM: PPC: Book3S: PR: Fix with CONFIG_PREEMPT=y
@ 2013-11-29 2:54 Alexander Graf
2013-11-29 2:54 ` [PATCH 1/4] KVM: PPC: Book3S: PR: Don't clobber our exit handler id Alexander Graf
` (3 more replies)
0 siblings, 4 replies; 7+ messages in thread
From: Alexander Graf @ 2013-11-29 2:54 UTC (permalink / raw)
To: kvm-ppc@vger.kernel.org; +Cc: kvm@vger.kernel.org mailing list
When we run Book3S PR KVM with CONFIG_PREEMPT=y, badness occurs.
The reason is that we don't expect interrupts to occur in the exit
path until we hit the point where we enable interrupts (after the
svcpu sync).
However, thanks to our manual interrupt replay logic we do get
preempted, because the interrupt handler thinks we have interrupts
enabled.
While debugging this, I also stumbled over a register-clobbering
issue that this patch set fixes as well.
I would consider all of these patches for 3.13 with CC on stable
once they're through review.
Alex
Alexander Graf (4):
KVM: PPC: Book3S: PR: Don't clobber our exit handler id
KVM: PPC: Book3S: PR: Export kvmppc_copy_to|from_svcpu
KVM: PPC: Book3S: PR: Make svcpu -> vcpu store preempt savvy
KVM: PPC: Book3S: PR: Enable interrupts earlier
arch/powerpc/include/asm/kvm_book3s.h | 4 ++++
arch/powerpc/include/asm/kvm_book3s_asm.h | 1 +
arch/powerpc/kvm/book3s_interrupts.S | 21 ++++++++++++---------
arch/powerpc/kvm/book3s_pr.c | 22 ++++++++++++++++++++++
arch/powerpc/kvm/book3s_rmhandlers.S | 6 +-----
5 files changed, 40 insertions(+), 14 deletions(-)
--
1.8.1.4
* [PATCH 1/4] KVM: PPC: Book3S: PR: Don't clobber our exit handler id
2013-11-29 2:54 [PATCH 0/4] KVM: PPC: Book3S: PR: Fix with CONFIG_PREEMPT=y Alexander Graf
@ 2013-11-29 2:54 ` Alexander Graf
2013-11-30 7:20 ` Paul Mackerras
2013-11-29 2:54 ` [PATCH 2/4] KVM: PPC: Book3S: PR: Export kvmppc_copy_to|from_svcpu Alexander Graf
` (2 subsequent siblings)
3 siblings, 1 reply; 7+ messages in thread
From: Alexander Graf @ 2013-11-29 2:54 UTC (permalink / raw)
To: kvm-ppc@vger.kernel.org; +Cc: kvm@vger.kernel.org mailing list
We call a C helper to save all svcpu fields into our vcpu. The C
ABI states that r12 is considered volatile. However, we currently
keep our exit handler id in r12.
So we need to save it into a non-volatile register instead, one
that is guaranteed to be preserved across the C call.
This bug hasn't hit anyone yet because gcc happens to generate code
that doesn't need r12 at all, which means the value stayed intact
across the call by sheer luck. But we can't rely on that.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/kvm/book3s_interrupts.S | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_interrupts.S b/arch/powerpc/kvm/book3s_interrupts.S
index f4dd041..2f7d571 100644
--- a/arch/powerpc/kvm/book3s_interrupts.S
+++ b/arch/powerpc/kvm/book3s_interrupts.S
@@ -132,9 +132,18 @@ kvm_start_lightweight:
*
*/
+ PPC_LL r3, GPR4(r1) /* vcpu pointer */
+
+ /*
+ * kvmppc_copy_from_svcpu can clobber volatile registers, save
+ * r14 to get a spare scratch register for the exit handler id.
+ */
+ PPC_STL r14, VCPU_GPR(R14)(r3)
+ mr r14, r12
+
/* Transfer reg values from shadow vcpu back to vcpu struct */
/* On 64-bit, interrupts are still off at this point */
- PPC_LL r3, GPR4(r1) /* vcpu pointer */
+
GET_SHADOW_VCPU(r4)
bl FUNC(kvmppc_copy_from_svcpu)
nop
@@ -151,13 +160,11 @@ kvm_start_lightweight:
*/
ld r3, PACA_SPRG3(r13)
mtspr SPRN_SPRG3, r3
-
#endif /* CONFIG_PPC_BOOK3S_64 */
/* R7 = vcpu */
PPC_LL r7, GPR4(r1)
- PPC_STL r14, VCPU_GPR(R14)(r7)
PPC_STL r15, VCPU_GPR(R15)(r7)
PPC_STL r16, VCPU_GPR(R16)(r7)
PPC_STL r17, VCPU_GPR(R17)(r7)
@@ -177,7 +184,7 @@ kvm_start_lightweight:
PPC_STL r31, VCPU_GPR(R31)(r7)
/* Pass the exit number as 3rd argument to kvmppc_handle_exit */
- mr r5, r12
+ mr r5, r14
/* Restore r3 (kvm_run) and r4 (vcpu) */
REST_2GPRS(3, r1)
--
1.8.1.4
* [PATCH 2/4] KVM: PPC: Book3S: PR: Export kvmppc_copy_to|from_svcpu
2013-11-29 2:54 [PATCH 0/4] KVM: PPC: Book3S: PR: Fix with CONFIG_PREEMPT=y Alexander Graf
2013-11-29 2:54 ` [PATCH 1/4] KVM: PPC: Book3S: PR: Don't clobber our exit handler id Alexander Graf
@ 2013-11-29 2:54 ` Alexander Graf
2013-11-29 2:55 ` [PATCH 3/4] KVM: PPC: Book3S: PR: Make svcpu -> vcpu store preempt savvy Alexander Graf
2013-11-29 2:55 ` [PATCH 4/4] KVM: PPC: Book3S: PR: Enable interrupts earlier Alexander Graf
3 siblings, 0 replies; 7+ messages in thread
From: Alexander Graf @ 2013-11-29 2:54 UTC (permalink / raw)
To: kvm-ppc@vger.kernel.org; +Cc: kvm@vger.kernel.org mailing list
The kvmppc_copy_{to,from}_svcpu functions are publicly visible,
so we should also export them in a header for other C files to
consume.
So far we didn't need this because we only called them from asm code.
The next patch will introduce a C caller.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/kvm_book3s.h | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 4a594b7..bc23b1b 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -192,6 +192,10 @@ extern void kvmppc_load_up_vsx(void);
extern u32 kvmppc_alignment_dsisr(struct kvm_vcpu *vcpu, unsigned int inst);
extern ulong kvmppc_alignment_dar(struct kvm_vcpu *vcpu, unsigned int inst);
extern int kvmppc_h_pr(struct kvm_vcpu *vcpu, unsigned long cmd);
+extern void kvmppc_copy_to_svcpu(struct kvmppc_book3s_shadow_vcpu *svcpu,
+ struct kvm_vcpu *vcpu);
+extern void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu,
+ struct kvmppc_book3s_shadow_vcpu *svcpu);
static inline struct kvmppc_vcpu_book3s *to_book3s(struct kvm_vcpu *vcpu)
{
--
1.8.1.4
* [PATCH 3/4] KVM: PPC: Book3S: PR: Make svcpu -> vcpu store preempt savvy
2013-11-29 2:54 [PATCH 0/4] KVM: PPC: Book3S: PR: Fix with CONFIG_PREEMPT=y Alexander Graf
2013-11-29 2:54 ` [PATCH 1/4] KVM: PPC: Book3S: PR: Don't clobber our exit handler id Alexander Graf
2013-11-29 2:54 ` [PATCH 2/4] KVM: PPC: Book3S: PR: Export kvmppc_copy_to|from_svcpu Alexander Graf
@ 2013-11-29 2:55 ` Alexander Graf
2013-11-29 2:55 ` [PATCH 4/4] KVM: PPC: Book3S: PR: Enable interrupts earlier Alexander Graf
3 siblings, 0 replies; 7+ messages in thread
From: Alexander Graf @ 2013-11-29 2:55 UTC (permalink / raw)
To: kvm-ppc@vger.kernel.org; +Cc: kvm@vger.kernel.org mailing list
As soon as we get back to our "highmem" handler in virtual address
space we may get preempted. Today the reason we can get preempted is
that we replay interrupts and all the lazy logic thinks we have
interrupts enabled.
However, it's not hard to make the code interruptible and that way
we can enable and handle interrupts even earlier.
This fixes random guest crashes that happened with CONFIG_PREEMPT=y
for me.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/kvm_book3s_asm.h | 1 +
arch/powerpc/kvm/book3s_pr.c | 22 ++++++++++++++++++++++
2 files changed, 23 insertions(+)
diff --git a/arch/powerpc/include/asm/kvm_book3s_asm.h b/arch/powerpc/include/asm/kvm_book3s_asm.h
index 0bd9348..412b2f3 100644
--- a/arch/powerpc/include/asm/kvm_book3s_asm.h
+++ b/arch/powerpc/include/asm/kvm_book3s_asm.h
@@ -106,6 +106,7 @@ struct kvmppc_host_state {
};
struct kvmppc_book3s_shadow_vcpu {
+ bool in_use;
ulong gpr[14];
u32 cr;
u32 xer;
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index fe14ca3..5b9e906 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -66,6 +66,7 @@ static void kvmppc_core_vcpu_load_pr(struct kvm_vcpu *vcpu, int cpu)
struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
memcpy(svcpu->slb, to_book3s(vcpu)->slb_shadow, sizeof(svcpu->slb));
svcpu->slb_max = to_book3s(vcpu)->slb_shadow_max;
+ svcpu->in_use = 0;
svcpu_put(svcpu);
#endif
vcpu->cpu = smp_processor_id();
@@ -78,6 +79,9 @@ static void kvmppc_core_vcpu_put_pr(struct kvm_vcpu *vcpu)
{
#ifdef CONFIG_PPC_BOOK3S_64
struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
+ if (svcpu->in_use) {
+ kvmppc_copy_from_svcpu(vcpu, svcpu);
+ }
memcpy(to_book3s(vcpu)->slb_shadow, svcpu->slb, sizeof(svcpu->slb));
to_book3s(vcpu)->slb_shadow_max = svcpu->slb_max;
svcpu_put(svcpu);
@@ -110,12 +114,26 @@ void kvmppc_copy_to_svcpu(struct kvmppc_book3s_shadow_vcpu *svcpu,
svcpu->ctr = vcpu->arch.ctr;
svcpu->lr = vcpu->arch.lr;
svcpu->pc = vcpu->arch.pc;
+ svcpu->in_use = true;
}
/* Copy data touched by real-mode code from shadow vcpu back to vcpu */
void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu,
struct kvmppc_book3s_shadow_vcpu *svcpu)
{
+ /*
+ * vcpu_put would just call us again because in_use hasn't
+ * been updated yet.
+ */
+ preempt_disable();
+
+ /*
+ * Maybe we were already preempted and synced the svcpu from
+ * our preempt notifiers. Don't bother touching this svcpu then.
+ */
+ if (!svcpu->in_use)
+ goto out;
+
vcpu->arch.gpr[0] = svcpu->gpr[0];
vcpu->arch.gpr[1] = svcpu->gpr[1];
vcpu->arch.gpr[2] = svcpu->gpr[2];
@@ -139,6 +157,10 @@ void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu,
vcpu->arch.fault_dar = svcpu->fault_dar;
vcpu->arch.fault_dsisr = svcpu->fault_dsisr;
vcpu->arch.last_inst = svcpu->last_inst;
+ svcpu->in_use = false;
+
+out:
+ preempt_enable();
}
static int kvmppc_core_check_requests_pr(struct kvm_vcpu *vcpu)
--
1.8.1.4
* [PATCH 4/4] KVM: PPC: Book3S: PR: Enable interrupts earlier
2013-11-29 2:54 [PATCH 0/4] KVM: PPC: Book3S: PR: Fix with CONFIG_PREEMPT=y Alexander Graf
` (2 preceding siblings ...)
2013-11-29 2:55 ` [PATCH 3/4] KVM: PPC: Book3S: PR: Make svcpu -> vcpu store preempt savvy Alexander Graf
@ 2013-11-29 2:55 ` Alexander Graf
3 siblings, 0 replies; 7+ messages in thread
From: Alexander Graf @ 2013-11-29 2:55 UTC (permalink / raw)
To: kvm-ppc@vger.kernel.org; +Cc: kvm@vger.kernel.org mailing list
Now that the svcpu sync is interrupt aware, we can enable interrupts
earlier in the exit code path again, moving 32-bit and 64-bit closer
together.
While at it, document the fact that we're always executing the exit
path with interrupts enabled so that the next person doesn't trip
over this.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/kvm/book3s_interrupts.S | 6 +-----
arch/powerpc/kvm/book3s_rmhandlers.S | 6 +-----
2 files changed, 2 insertions(+), 10 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_interrupts.S b/arch/powerpc/kvm/book3s_interrupts.S
index 2f7d571..a8cc2b2 100644
--- a/arch/powerpc/kvm/book3s_interrupts.S
+++ b/arch/powerpc/kvm/book3s_interrupts.S
@@ -129,6 +129,7 @@ kvm_start_lightweight:
* R12 = exit handler id
* R13 = PACA
* SVCPU.* = guest *
+ * MSR.EE = 1
*
*/
@@ -149,11 +150,6 @@ kvm_start_lightweight:
nop
#ifdef CONFIG_PPC_BOOK3S_64
- /* Re-enable interrupts */
- ld r3, HSTATE_HOST_MSR(r13)
- ori r3, r3, MSR_EE
- MTMSR_EERI(r3)
-
/*
* Reload kernel SPRG3 value.
* No need to save guest value as usermode can't modify SPRG3.
diff --git a/arch/powerpc/kvm/book3s_rmhandlers.S b/arch/powerpc/kvm/book3s_rmhandlers.S
index a38c4c9..c3c5231 100644
--- a/arch/powerpc/kvm/book3s_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_rmhandlers.S
@@ -153,15 +153,11 @@ _GLOBAL(kvmppc_entry_trampoline)
li r6, MSR_IR | MSR_DR
andc r6, r5, r6 /* Clear DR and IR in MSR value */
-#ifdef CONFIG_PPC_BOOK3S_32
/*
* Set EE in HOST_MSR so that it's enabled when we get into our
- * C exit handler function. On 64-bit we delay enabling
- * interrupts until we have finished transferring stuff
- * to or from the PACA.
+ * C exit handler function.
*/
ori r5, r5, MSR_EE
-#endif
mtsrr0 r7
mtsrr1 r6
RFI
--
1.8.1.4
* Re: [PATCH 1/4] KVM: PPC: Book3S: PR: Don't clobber our exit handler id
2013-11-29 2:54 ` [PATCH 1/4] KVM: PPC: Book3S: PR: Don't clobber our exit handler id Alexander Graf
@ 2013-11-30 7:20 ` Paul Mackerras
2013-11-30 12:38 ` Alexander Graf
0 siblings, 1 reply; 7+ messages in thread
From: Paul Mackerras @ 2013-11-30 7:20 UTC (permalink / raw)
To: Alexander Graf; +Cc: kvm-ppc@vger.kernel.org, kvm@vger.kernel.org mailing list
On Fri, Nov 29, 2013 at 03:54:58AM +0100, Alexander Graf wrote:
> We call a C helper to save all svcpu fields into our vcpu. The C
> ABI states that r12 is considered volatile. However, we keep our
> exit handler id in r12 currently.
>
> So we need to save it away into a non-volatile register instead
> that definitely does get preserved across the C call.
>
> This bug usually didn't hit anyone yet since gcc is smart enough
> to generate code that doesn't even need r12 which means it stayed
> identical throughout the call by sheer luck. But we can't rely on
> that.
>
> Signed-off-by: Alexander Graf <agraf@suse.de>
> ---
> arch/powerpc/kvm/book3s_interrupts.S | 15 +++++++++++----
> 1 file changed, 11 insertions(+), 4 deletions(-)
>
> diff --git a/arch/powerpc/kvm/book3s_interrupts.S b/arch/powerpc/kvm/book3s_interrupts.S
> index f4dd041..2f7d571 100644
> --- a/arch/powerpc/kvm/book3s_interrupts.S
> +++ b/arch/powerpc/kvm/book3s_interrupts.S
> @@ -132,9 +132,18 @@ kvm_start_lightweight:
> *
> */
>
> + PPC_LL r3, GPR4(r1) /* vcpu pointer */
> +
> + /*
> + * kvmppc_copy_from_svcpu can clobber volatile registers, save
> + * r14 to get a spare scratch register for the exit handler id.
> + */
> + PPC_STL r14, VCPU_GPR(R14)(r3)
> + mr r14, r12
In the case where kvmppc_handle_exit_pr returns RESUME_GUEST and we do
the lightweight guest re-entry, I don't see anywhere that we re-load
r14 with the guest value. So I think this results in guest-visible
corruption of r14.
Why not just use the vcpu->arch.trap field? Store r12 to
VCPU_TRAP(r3) here and load r5 from VCPU_TRAP(r4) just before calling
kvmppc_handle_exit_pr().
Regards,
Paul.
* Re: [PATCH 1/4] KVM: PPC: Book3S: PR: Don't clobber our exit handler id
2013-11-30 7:20 ` Paul Mackerras
@ 2013-11-30 12:38 ` Alexander Graf
0 siblings, 0 replies; 7+ messages in thread
From: Alexander Graf @ 2013-11-30 12:38 UTC (permalink / raw)
To: Paul Mackerras; +Cc: kvm-ppc, kvm@vger.kernel.org mailing list
On 30.11.2013, at 08:20, Paul Mackerras <paulus@samba.org> wrote:
> On Fri, Nov 29, 2013 at 03:54:58AM +0100, Alexander Graf wrote:
>> We call a C helper to save all svcpu fields into our vcpu. The C
>> ABI states that r12 is considered volatile. However, we keep our
>> exit handler id in r12 currently.
>>
>> So we need to save it away into a non-volatile register instead
>> that definitely does get preserved across the C call.
>>
>> This bug usually didn't hit anyone yet since gcc is smart enough
>> to generate code that doesn't even need r12 which means it stayed
>> identical throughout the call by sheer luck. But we can't rely on
>> that.
>>
>> Signed-off-by: Alexander Graf <agraf@suse.de>
>> ---
>> arch/powerpc/kvm/book3s_interrupts.S | 15 +++++++++++----
>> 1 file changed, 11 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/powerpc/kvm/book3s_interrupts.S b/arch/powerpc/kvm/book3s_interrupts.S
>> index f4dd041..2f7d571 100644
>> --- a/arch/powerpc/kvm/book3s_interrupts.S
>> +++ b/arch/powerpc/kvm/book3s_interrupts.S
>> @@ -132,9 +132,18 @@ kvm_start_lightweight:
>> *
>> */
>>
>> + PPC_LL r3, GPR4(r1) /* vcpu pointer */
>> +
>> + /*
>> + * kvmppc_copy_from_svcpu can clobber volatile registers, save
>> + * r14 to get a spare scratch register for the exit handler id.
>> + */
>> + PPC_STL r14, VCPU_GPR(R14)(r3)
>> + mr r14, r12
>
> In the case where kvmppc_handle_exit_pr returns RESUME_GUEST and we do
> the lightweight guest re-entry, I don't see anywhere that we re-load
> r14 with the guest value. So I think this results in guest-visible
> corruption of r14.
Ugh. You're right.
> Why not just use the vcpu->arch.trap field? Store r12 to
> VCPU_TRAP(r3) here and load r5 from VCPU_TRAP(r4) just before calling
> kvmppc_handle_exit_pr().
I was trying to be smart and not do a memory access for a field that I always need. But then again I always do need r14 just as well, so it doesn't actually make a difference. I'll change it.
Thanks a lot!
Alex
end of thread, other threads:[~2013-11-30 12:38 UTC | newest]
Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-11-29 2:54 [PATCH 0/4] KVM: PPC: Book3S: PR: Fix with CONFIG_PREEMPT=y Alexander Graf
2013-11-29 2:54 ` [PATCH 1/4] KVM: PPC: Book3S: PR: Don't clobber our exit handler id Alexander Graf
2013-11-30 7:20 ` Paul Mackerras
2013-11-30 12:38 ` Alexander Graf
2013-11-29 2:54 ` [PATCH 2/4] KVM: PPC: Book3S: PR: Export kvmppc_copy_to|from_svcpu Alexander Graf
2013-11-29 2:55 ` [PATCH 3/4] KVM: PPC: Book3S: PR: Make svcpu -> vcpu store preempt savvy Alexander Graf
2013-11-29 2:55 ` [PATCH 4/4] KVM: PPC: Book3S: PR: Enable interrupts earlier Alexander Graf