* [PATCH 0/6] KVM: PPC: FPU/Altivec/VSX bringup
From: Alexander Graf @ 2010-01-15 13:49 UTC
To: kvm-ppc; +Cc: KVM General
Right now the code to use external providers (FPU/Altivec/VSX) is rather hacky.
We just set the respective feature bit in the guest MSR when the guest requests
it and declare it good. Now, Linux wants to manage those units too: whenever
a process switch occurs, it saves the external provider state and reloads the
current thread's.

Unfortunately, we didn't tell Linux about our guest state. So Linux doesn't even
get the chance to swap any of our registers around, which means it ends up
restoring registers from random processes - and we lose all state.
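The shape of the fix is an ownership handshake between KVM and Linux's lazy
switching. Here is a minimal, self-contained C sketch of that idea - all names
and types are simplified stand-ins, not the actual kernel code:

#include <stdio.h>
#include <string.h>

/* Hypothetical, simplified stand-ins for the kernel structures. */
#define MSR_FP 0x2000UL

struct thread { double fpr[32]; };           /* host thread's FPU save area */
struct vcpu   { double fpr[32]; unsigned long guest_owned_ext; };

static struct thread current_thread;

/* Guest touched the FPU: hand the unit (and the guest's values) to it. */
static void handle_ext(struct vcpu *v, unsigned long msr)
{
	memcpy(current_thread.fpr, v->fpr, sizeof(v->fpr));
	v->guest_owned_ext |= msr;    /* Linux now saves/restores *guest* state */
}

/* vcpu_put / context switch: take the unit back, preserving guest state. */
static void giveup_ext(struct vcpu *v, unsigned long msr)
{
	if (!(v->guest_owned_ext & msr))
		return;
	memcpy(v->fpr, current_thread.fpr, sizeof(v->fpr));
	v->guest_owned_ext &= ~msr;
}

int main(void)
{
	struct vcpu v = { .fpr = { 1.5 } };
	handle_ext(&v, MSR_FP);
	current_thread.fpr[0] = 2.5;  /* guest "computed" something */
	giveup_ext(&v, MSR_FP);
	printf("guest fpr0 = %g\n", v.fpr[0]);   /* 2.5, state preserved */
	return 0;
}

The patches below do exactly this around vcpu_put and the *_UNAVAIL exits.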
This patchset makes at least FPU and Altivec work. I don't have a VSX machine to
test that extension on. While at it, it also fixes some issues I've stumbled
across during debugging.
The basic ideas on how this should work come from Benjamin Herrenschmidt.
Thanks a lot for giving input on this one (and all the other times)!
Alexander Graf (6):
KVM: PPC: Export __giveup_vsx
KVM: PPC: Add helper functions to call real mode loaders
KVM: PPC: Add support for FPU/Altivec/VSX
KVM: PPC: Fix initial GPR settings
KVM: PPC: Keep SRR1 flags around in shadow_msr
KVM: PPC: Move Shadow MSR calculation to function
arch/powerpc/include/asm/kvm_book3s.h | 4 +-
arch/powerpc/include/asm/kvm_host.h | 15 ++-
arch/powerpc/include/asm/kvm_ppc.h | 10 +-
arch/powerpc/kernel/asm-offsets.c | 1 +
arch/powerpc/kernel/ppc_ksyms.c | 1 +
arch/powerpc/kvm/book3s.c | 223 +++++++++++++++++++++++++++++--
arch/powerpc/kvm/book3s_64_exports.c | 7 +
arch/powerpc/kvm/book3s_64_interrupts.S | 2 +-
arch/powerpc/kvm/book3s_64_rmhandlers.S | 34 +++++
9 files changed, 280 insertions(+), 17 deletions(-)
* [PATCH 1/6] KVM: PPC: Export __giveup_vsx
From: Alexander Graf @ 2010-01-15 13:49 UTC
To: kvm-ppc; +Cc: KVM General
We need to be able to explicitly give up only VSX in KVM, so let's export that
specific function to module space.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/kernel/ppc_ksyms.c | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/arch/powerpc/kernel/ppc_ksyms.c b/arch/powerpc/kernel/ppc_ksyms.c
index 4254514..ab3e392 100644
--- a/arch/powerpc/kernel/ppc_ksyms.c
+++ b/arch/powerpc/kernel/ppc_ksyms.c
@@ -107,6 +107,7 @@ EXPORT_SYMBOL(giveup_altivec);
#endif /* CONFIG_ALTIVEC */
#ifdef CONFIG_VSX
EXPORT_SYMBOL(giveup_vsx);
+EXPORT_SYMBOL_GPL(__giveup_vsx);
#endif /* CONFIG_VSX */
#ifdef CONFIG_SPE
EXPORT_SYMBOL(giveup_spe);
--
1.6.0.2
* [PATCH 2/6] KVM: PPC: Add helper functions to call real mode loaders
From: Alexander Graf @ 2010-01-15 13:49 UTC
To: kvm-ppc; +Cc: KVM General
Linux contains quite a few bits of code to load the FPU, Altivec and VSX lazily
for a task. It calls those bits in real mode, coming from an interrupt handler.

For KVM we'd better reuse those, so let's wrap a bit of trampoline magic around
them; then we can call them from normal module code.
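Conceptually, the define_load_up macro in the diff stamps out one thin wrapper
per unit. Here is the same trick reduced to a runnable C toy (the load_up_*
bodies and everything else here are made up; the real wrappers additionally
clear MSR_EE/MSR_DR around the call, as the asm below shows):

#include <stdio.h>

static void load_up_fpu(void)     { puts("loading FPU state"); }
static void load_up_altivec(void) { puts("loading Altivec state"); }

/* One macro generates a family of thin wrappers around per-unit loaders. */
#define define_load_up(what)                          \
	void kvmppc_load_up_##what(void)              \
	{                                             \
		/* real version: mask MSR_EE/MSR_DR first */ \
		load_up_##what();                     \
		/* real version: restore the old MSR here */ \
	}

define_load_up(fpu)
define_load_up(altivec)

int main(void)
{
	kvmppc_load_up_fpu();      /* callable from normal module code */
	kvmppc_load_up_altivec();
	return 0;
}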
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/kvm_book3s.h | 3 ++
arch/powerpc/kvm/book3s_64_exports.c | 7 ++++++
arch/powerpc/kvm/book3s_64_rmhandlers.S | 34 +++++++++++++++++++++++++++++++
3 files changed, 44 insertions(+), 0 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index c7db69f..e1b441c 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -124,6 +124,9 @@ extern void kvmppc_set_bat(struct kvm_vcpu *vcpu, struct kvmppc_bat *bat,
extern u32 kvmppc_trampoline_lowmem;
extern u32 kvmppc_trampoline_enter;
extern void kvmppc_rmcall(ulong srr0, ulong srr1);
+extern void kvmppc_load_up_fpu(void);
+extern void kvmppc_load_up_altivec(void);
+extern void kvmppc_load_up_vsx(void);
static inline struct kvmppc_vcpu_book3s *to_book3s(struct kvm_vcpu *vcpu)
{
diff --git a/arch/powerpc/kvm/book3s_64_exports.c b/arch/powerpc/kvm/book3s_64_exports.c
index 99b0712..1dd5a1d 100644
--- a/arch/powerpc/kvm/book3s_64_exports.c
+++ b/arch/powerpc/kvm/book3s_64_exports.c
@@ -23,3 +23,10 @@
EXPORT_SYMBOL_GPL(kvmppc_trampoline_enter);
EXPORT_SYMBOL_GPL(kvmppc_trampoline_lowmem);
EXPORT_SYMBOL_GPL(kvmppc_rmcall);
+EXPORT_SYMBOL_GPL(kvmppc_load_up_fpu);
+#ifdef CONFIG_ALTIVEC
+EXPORT_SYMBOL_GPL(kvmppc_load_up_altivec);
+#endif
+#ifdef CONFIG_VSX
+EXPORT_SYMBOL_GPL(kvmppc_load_up_vsx);
+#endif
diff --git a/arch/powerpc/kvm/book3s_64_rmhandlers.S b/arch/powerpc/kvm/book3s_64_rmhandlers.S
index e7091c9..c83c60a 100644
--- a/arch/powerpc/kvm/book3s_64_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_64_rmhandlers.S
@@ -158,6 +158,40 @@ _GLOBAL(kvmppc_rmcall)
mtsrr1 r4
RFI
+/*
+ * Activate current's external feature (FPU/Altivec/VSX)
+ */
+#define define_load_up(what) \
+ \
+_GLOBAL(kvmppc_load_up_ ## what); \
+ subi r1, r1, INT_FRAME_SIZE; \
+ mflr r3; \
+ std r3, _LINK(r1); \
+ mfmsr r4; \
+ std r31, GPR3(r1); \
+ mr r31, r4; \
+ li r5, MSR_DR; \
+ oris r5, r5, MSR_EE@h; \
+ andc r4, r4, r5; \
+ mtmsr r4; \
+ \
+ bl .load_up_ ## what; \
+ \
+ mtmsr r31; \
+ ld r3, _LINK(r1); \
+ ld r31, GPR3(r1); \
+ addi r1, r1, INT_FRAME_SIZE; \
+ mtlr r3; \
+ blr
+
+define_load_up(fpu)
+#ifdef CONFIG_ALTIVEC
+define_load_up(altivec)
+#endif
+#ifdef CONFIG_VSX
+define_load_up(vsx)
+#endif
+
.global kvmppc_trampoline_lowmem
kvmppc_trampoline_lowmem:
.long kvmppc_handler_lowmem_trampoline - _stext
--
1.6.0.2
* [PATCH 3/6] KVM: PPC: Add support for FPU/Altivec/VSX
From: Alexander Graf @ 2010-01-15 13:49 UTC
To: kvm-ppc; +Cc: KVM General
When our guest starts using the FPU, Altivec or VSX, we need to make sure
Linux knows about it and sneak into its process switching code accordingly.

This patch makes accesses to the above parts of the system work inside the
VM.
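In rough strokes, the handshake this adds behaves like the following runnable
toy model (names and structures simplified, not kernel code; the real check
and hand-over live in kvmppc_handle_ext in the diff below):

#include <stdio.h>

#define MSR_FP 0x2000UL

struct vcpu {
	unsigned long msr;             /* what the guest thinks its MSR is */
	unsigned long guest_owned_ext; /* units currently holding guest state */
};

static int handle_fp_unavail(struct vcpu *v)
{
	if (!(v->msr & MSR_FP)) {
		puts("guest MSR.FP clear -> reflect FP-unavailable into guest");
		return 0;
	}
	/* guest legitimately owns the FPU now: load its regs, mark ownership */
	puts("loading guest FPU state into the real FPU");
	v->guest_owned_ext |= MSR_FP;
	return 1;
}

int main(void)
{
	struct vcpu v = { .msr = 0 };
	handle_fp_unavail(&v);  /* guest hasn't enabled FP -> inject interrupt */
	v.msr |= MSR_FP;        /* guest sets MSR.FP and retries */
	handle_fp_unavail(&v);  /* now we really load up the unit */
	return 0;
}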
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/kvm_host.h | 14 +++-
arch/powerpc/kvm/book3s.c | 193 ++++++++++++++++++++++++++++++++++-
2 files changed, 201 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index f7215e6..c30a70c 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -172,9 +172,20 @@ struct kvm_vcpu_arch {
struct kvmppc_mmu mmu;
#endif
- u64 fpr[32];
ulong gpr[32];
+ u64 fpr[32];
+ u32 fpscr;
+
+#ifdef CONFIG_ALTIVEC
+ vector128 vr[32];
+ vector128 vscr;
+#endif
+
+#ifdef CONFIG_VSX
+ u64 vsr[32];
+#endif
+
ulong pc;
ulong ctr;
ulong lr;
@@ -188,6 +199,7 @@ struct kvm_vcpu_arch {
#ifdef CONFIG_PPC64
ulong shadow_msr;
ulong hflags;
+ ulong guest_owned_ext;
#endif
u32 mmucr;
ulong sprg0;
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 02861fd..2cb1813 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -33,6 +33,9 @@
/* #define EXIT_DEBUG */
/* #define EXIT_DEBUG_SIMPLE */
+/* #define DEBUG_EXT */
+
+static void kvmppc_giveup_ext(struct kvm_vcpu *vcpu, ulong msr);
struct kvm_stats_debugfs_item debugfs_entries[] = {
{ "exits", VCPU_STAT(sum_exits) },
@@ -77,6 +80,10 @@ void kvmppc_core_vcpu_put(struct kvm_vcpu *vcpu)
memcpy(&to_book3s(vcpu)->shadow_vcpu, &get_paca()->shadow_vcpu,
sizeof(get_paca()->shadow_vcpu));
to_book3s(vcpu)->slb_shadow_max = get_paca()->kvm_slb_max;
+
+ kvmppc_giveup_ext(vcpu, MSR_FP);
+ kvmppc_giveup_ext(vcpu, MSR_VEC);
+ kvmppc_giveup_ext(vcpu, MSR_VSX);
}
#if defined(EXIT_DEBUG)
@@ -97,9 +104,9 @@ void kvmppc_set_msr(struct kvm_vcpu *vcpu, u64 msr)
msr &= to_book3s(vcpu)->msr_mask;
vcpu->arch.msr = msr;
vcpu->arch.shadow_msr = msr | MSR_USER32;
- vcpu->arch.shadow_msr &= ( MSR_VEC | MSR_VSX | MSR_FP | MSR_FE0 |
- MSR_USER64 | MSR_SE | MSR_BE | MSR_DE |
- MSR_FE1);
+ vcpu->arch.shadow_msr &= (MSR_FE0 | MSR_USER64 | MSR_SE | MSR_BE |
+ MSR_DE | MSR_FE1);
+ vcpu->arch.shadow_msr |= (msr & vcpu->arch.guest_owned_ext);
if (msr & (MSR_WE|MSR_POW)) {
if (!vcpu->arch.pending_exceptions) {
@@ -551,6 +558,117 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
return r;
}
+static inline int get_fpr_index(int i)
+{
+#ifdef CONFIG_VSX
+ i *= 2;
+#endif
+ return i;
+}
+
+/* Give up external provider (FPU, Altivec, VSX) */
+static void kvmppc_giveup_ext(struct kvm_vcpu *vcpu, ulong msr)
+{
+ struct thread_struct *t = &current->thread;
+ u64 *vcpu_fpr = vcpu->arch.fpr;
+ u64 *vcpu_vsx = vcpu->arch.vsr;
+ u64 *thread_fpr = (u64*)t->fpr;
+ int i;
+
+ if (!(vcpu->arch.guest_owned_ext & msr))
+ return;
+
+#ifdef DEBUG_EXT
+ printk(KERN_INFO "Giving up ext 0x%lx\n", msr);
+#endif
+
+ switch (msr) {
+ case MSR_FP:
+ giveup_fpu(current);
+ for (i = 0; i < ARRAY_SIZE(vcpu->arch.fpr); i++)
+ vcpu_fpr[i] = thread_fpr[get_fpr_index(i)];
+
+ vcpu->arch.fpscr = t->fpscr.val;
+ break;
+ case MSR_VEC:
+#ifdef CONFIG_ALTIVEC
+ giveup_altivec(current);
+ memcpy(vcpu->arch.vr, t->vr, sizeof(vcpu->arch.vr));
+ vcpu->arch.vscr = t->vscr;
+#endif
+ break;
+ case MSR_VSX:
+#ifdef CONFIG_VSX
+ __giveup_vsx(current);
+ for (i = 0; i < ARRAY_SIZE(vcpu->arch.vsr); i++)
+ vcpu_vsx[i] = thread_fpr[get_fpr_index(i) + 1];
+#endif
+ break;
+ default:
+ BUG();
+ }
+
+ vcpu->arch.guest_owned_ext &= ~msr;
+ current->thread.regs->msr &= ~msr;
+ kvmppc_set_msr(vcpu, vcpu->arch.msr);
+}
+
+/* Handle external providers (FPU, Altivec, VSX) */
+static int kvmppc_handle_ext(struct kvm_vcpu *vcpu, unsigned int exit_nr,
+ ulong msr)
+{
+ struct thread_struct *t = &current->thread;
+ u64 *vcpu_fpr = vcpu->arch.fpr;
+ u64 *vcpu_vsx = vcpu->arch.vsr;
+ u64 *thread_fpr = (u64*)t->fpr;
+ int i;
+
+ if (!(vcpu->arch.msr & msr)) {
+ kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
+ return RESUME_GUEST;
+ }
+
+#ifdef DEBUG_EXT
+ printk(KERN_INFO "Loading up ext 0x%lx\n", msr);
+#endif
+
+ current->thread.regs->msr |= msr;
+
+ switch (msr) {
+ case MSR_FP:
+ for (i = 0; i < ARRAY_SIZE(vcpu->arch.fpr); i++)
+ thread_fpr[get_fpr_index(i)] = vcpu_fpr[i];
+
+ t->fpscr.val = vcpu->arch.fpscr;
+ t->fpexc_mode = 0;
+ kvmppc_load_up_fpu();
+ break;
+ case MSR_VEC:
+#ifdef CONFIG_ALTIVEC
+ memcpy(t->vr, vcpu->arch.vr, sizeof(vcpu->arch.vr));
+ t->vscr = vcpu->arch.vscr;
+ t->vrsave = -1;
+ kvmppc_load_up_altivec();
+#endif
+ break;
+ case MSR_VSX:
+#ifdef CONFIG_VSX
+ for (i = 0; i < ARRAY_SIZE(vcpu->arch.vsr); i++)
+ thread_fpr[get_fpr_index(i) + 1] = vcpu_vsx[i];
+ kvmppc_load_up_vsx();
+#endif
+ break;
+ default:
+ BUG();
+ }
+
+ vcpu->arch.guest_owned_ext |= msr;
+
+ kvmppc_set_msr(vcpu, vcpu->arch.msr);
+
+ return RESUME_GUEST;
+}
+
int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
unsigned int exit_nr)
{
@@ -674,11 +792,17 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
r = RESUME_GUEST;
break;
- case BOOK3S_INTERRUPT_MACHINE_CHECK:
case BOOK3S_INTERRUPT_FP_UNAVAIL:
- case BOOK3S_INTERRUPT_TRACE:
+ r = kvmppc_handle_ext(vcpu, exit_nr, MSR_FP);
+ break;
case BOOK3S_INTERRUPT_ALTIVEC:
+ r = kvmppc_handle_ext(vcpu, exit_nr, MSR_VEC);
+ break;
case BOOK3S_INTERRUPT_VSX:
+ r = kvmppc_handle_ext(vcpu, exit_nr, MSR_VSX);
+ break;
+ case BOOK3S_INTERRUPT_MACHINE_CHECK:
+ case BOOK3S_INTERRUPT_TRACE:
kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
r = RESUME_GUEST;
break;
@@ -959,6 +1083,10 @@ extern int __kvmppc_vcpu_entry(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu);
int __kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
{
int ret;
+ struct thread_struct ext_bkp;
+ bool save_vec = current->thread.used_vr;
+ bool save_vsx = current->thread.used_vsr;
+ ulong ext_msr;
/* No need to go into the guest when all we do is going out */
if (signal_pending(current)) {
@@ -966,6 +1094,35 @@ int __kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
return -EINTR;
}
+ /* Save FPU state in stack */
+ if (current->thread.regs->msr & MSR_FP)
+ giveup_fpu(current);
+ memcpy(ext_bkp.fpr, current->thread.fpr, sizeof(current->thread.fpr));
+ ext_bkp.fpscr = current->thread.fpscr;
+ ext_bkp.fpexc_mode = current->thread.fpexc_mode;
+
+#ifdef CONFIG_ALTIVEC
+ /* Save Altivec state in stack */
+ if (save_vec) {
+ if (current->thread.regs->msr & MSR_VEC)
+ giveup_altivec(current);
+ memcpy(ext_bkp.vr, current->thread.vr, sizeof(ext_bkp.vr));
+ ext_bkp.vscr = current->thread.vscr;
+ ext_bkp.vrsave = current->thread.vrsave;
+ }
+ ext_bkp.used_vr = current->thread.used_vr;
+#endif
+
+#ifdef CONFIG_VSX
+ /* Save VSX state in stack */
+ if (save_vsx && (current->thread.regs->msr & MSR_VSX))
+ __giveup_vsx(current);
+ ext_bkp.used_vsr = current->thread.used_vsr;
+#endif
+
+ /* Remember the MSR with disabled extensions */
+ ext_msr = current->thread.regs->msr;
+
/* XXX we get called with irq disabled - change that! */
local_irq_enable();
@@ -973,6 +1130,32 @@ int __kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
local_irq_disable();
+ current->thread.regs->msr = ext_msr;
+
+ /* Make sure we save the guest FPU/Altivec/VSX state */
+ kvmppc_giveup_ext(vcpu, MSR_FP);
+ kvmppc_giveup_ext(vcpu, MSR_VEC);
+ kvmppc_giveup_ext(vcpu, MSR_VSX);
+
+ /* Restore FPU state from stack */
+ memcpy(current->thread.fpr, ext_bkp.fpr, sizeof(ext_bkp.fpr));
+ current->thread.fpscr = ext_bkp.fpscr;
+ current->thread.fpexc_mode = ext_bkp.fpexc_mode;
+
+#ifdef CONFIG_ALTIVEC
+ /* Restore Altivec state from stack */
+ if (save_vec && current->thread.used_vr) {
+ memcpy(current->thread.vr, ext_bkp.vr, sizeof(ext_bkp.vr));
+ current->thread.vscr = ext_bkp.vscr;
+ current->thread.vrsave = ext_bkp.vrsave;
+ }
+ current->thread.used_vr = ext_bkp.used_vr;
+#endif
+
+#ifdef CONFIG_VSX
+ current->thread.used_vsr = ext_bkp.used_vsr;
+#endif
+
return ret;
}
--
1.6.0.2
* [PATCH 4/6] KVM: PPC: Fix initial GPR settings
From: Alexander Graf @ 2010-01-15 13:49 UTC
To: kvm-ppc; +Cc: KVM General
Commit 7d01b4c3ed2bb33ceaf2d270cb4831a67a76b51b introduced PACA backed vcpu
values. Since then, when a userspace app sets GPRs before the vcpu is first
loaded, the set values get discarded.

This is because vcpu_load loads them from the vcpu backing store that we use
whenever we're not owning the PACA.

That behavior is not a major problem for qemu, which doesn't depend on it.
Other users (like kvmctl) do run into it though, so let's do it right.
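A runnable toy model of the bug and the fix (all names hypothetical; the real
write-through happens in kvmppc_set_gpr in the diff below):

#include <stdio.h>

struct shadow { unsigned long gpr[14]; };
static struct shadow paca;       /* live copy while the vcpu is loaded */
static struct shadow backing;    /* vcpu backing store */

static void set_gpr(int num, unsigned long val)
{
	paca.gpr[num] = val;
	backing.gpr[num] = val;  /* the fix: keep both copies in sync */
}

/* vcpu_load overwrites the PACA copy, discarding paca-only writes. */
static void vcpu_load(void) { paca = backing; }

int main(void)
{
	set_gpr(3, 0xdeadbeef);  /* userspace sets a GPR before the first run */
	vcpu_load();
	printf("r3 = 0x%lx\n", paca.gpr[3]);  /* survives thanks to the fix */
	return 0;
}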
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/kvm_book3s.h | 1 -
arch/powerpc/include/asm/kvm_ppc.h | 10 ++++++++--
2 files changed, 8 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index e1b441c..db7db0a 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -22,7 +22,6 @@
#include <linux/types.h>
#include <linux/kvm_host.h>
-#include <asm/kvm_ppc.h>
#include <asm/kvm_book3s_64_asm.h>
struct kvmppc_slb {
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 09816da..e264282 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -28,6 +28,9 @@
#include <linux/types.h>
#include <linux/kvm_types.h>
#include <linux/kvm_host.h>
+#ifdef CONFIG_PPC_BOOK3S
+#include <asm/kvm_book3s.h>
+#endif
enum emulation_result {
EMULATE_DONE, /* no further processing */
@@ -102,9 +105,10 @@ extern void kvmppc_core_destroy_mmu(struct kvm_vcpu *vcpu);
static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
{
- if ( num < 14 )
+ if ( num < 14 ) {
get_paca()->shadow_vcpu.gpr[num] = val;
- else
+ to_book3s(vcpu)->shadow_vcpu.gpr[num] = val;
+ } else
vcpu->arch.gpr[num] = val;
}
@@ -119,6 +123,7 @@ static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
{
get_paca()->shadow_vcpu.cr = val;
+ to_book3s(vcpu)->shadow_vcpu.cr = val;
}
static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
@@ -129,6 +134,7 @@ static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, u32 val)
{
get_paca()->shadow_vcpu.xer = val;
+ to_book3s(vcpu)->shadow_vcpu.xer = val;
}
static inline u32 kvmppc_get_xer(struct kvm_vcpu *vcpu)
--
1.6.0.2
* [PATCH 5/6] KVM: PPC: Keep SRR1 flags around in shadow_msr
From: Alexander Graf @ 2010-01-15 13:49 UTC
To: kvm-ppc; +Cc: KVM General
SRR1 stores more information than just the MSR value. It also stores
valuable information about the type of interrupt we received, for
example whether the storage interrupt we just got was because of a
missing htab entry or not. We use that information to speed up the
exit path.

Now if we get preempted before we can interpret the shadow_msr values,
we go through vcpu_put, which calls the MSR handler, which in turn sets
all the SRR1 information bits in shadow_msr to 0. Great.

So let's preserve the SRR1 specific bits in a dedicated shadow_srr1
field that setting the MSR leaves alone. They don't hurt.
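As a runnable toy model of why the dedicated field helps (the constant and
names are illustrative only, loosely modeled on the masks in the diff):

#include <stdio.h>

#define SRR1_NOHPTE 0x40000000UL   /* "page not in htab" style cause bit */

static unsigned long shadow_msr;   /* recomputed on every set_msr() */
static unsigned long shadow_srr1;  /* written only on guest exit */

static void set_msr(unsigned long msr)
{
	shadow_msr = msr;          /* cause bits stored here would be lost */
}

int main(void)
{
	shadow_msr |= SRR1_NOHPTE;  /* old scheme: exit stores the cause here */
	shadow_srr1 = SRR1_NOHPTE;  /* new scheme: dedicated field */
	set_msr(0x8000UL);          /* preemption -> vcpu_put -> set_msr */
	printf("old scheme kept cause bit: %lu\n", shadow_msr & SRR1_NOHPTE);
	printf("new scheme kept cause bit: %lu\n", shadow_srr1 & SRR1_NOHPTE);
	return 0;
}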
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/kvm_host.h | 1 +
arch/powerpc/kernel/asm-offsets.c | 1 +
arch/powerpc/kvm/book3s.c | 13 +++++++------
arch/powerpc/kvm/book3s_64_interrupts.S | 2 +-
4 files changed, 10 insertions(+), 7 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index c30a70c..715aa6b 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -198,6 +198,7 @@ struct kvm_vcpu_arch {
ulong msr;
#ifdef CONFIG_PPC64
ulong shadow_msr;
+ ulong shadow_srr1;
ulong hflags;
ulong guest_owned_ext;
#endif
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index ee99354..957ceb7 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -433,6 +433,7 @@ int main(void)
DEFINE(VCPU_HOST_R2, offsetof(struct kvm_vcpu, arch.host_r2));
DEFINE(VCPU_HOST_MSR, offsetof(struct kvm_vcpu, arch.host_msr));
DEFINE(VCPU_SHADOW_MSR, offsetof(struct kvm_vcpu, arch.shadow_msr));
+ DEFINE(VCPU_SHADOW_SRR1, offsetof(struct kvm_vcpu, arch.shadow_srr1));
DEFINE(VCPU_TRAMPOLINE_LOWMEM, offsetof(struct kvm_vcpu, arch.trampoline_lowmem));
DEFINE(VCPU_TRAMPOLINE_ENTER, offsetof(struct kvm_vcpu, arch.trampoline_enter));
DEFINE(VCPU_HIGHMEM_HANDLER, offsetof(struct kvm_vcpu, arch.highmem_handler));
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 2cb1813..58f5200 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -524,14 +524,14 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
/* Page not found in guest PTE entries */
vcpu->arch.dear = vcpu->arch.fault_dear;
to_book3s(vcpu)->dsisr = vcpu->arch.fault_dsisr;
- vcpu->arch.msr |= (vcpu->arch.shadow_msr & 0x00000000f8000000ULL);
+ vcpu->arch.msr |= (vcpu->arch.shadow_srr1 & 0x00000000f8000000ULL);
kvmppc_book3s_queue_irqprio(vcpu, vec);
} else if (page_found == -EPERM) {
/* Storage protection */
vcpu->arch.dear = vcpu->arch.fault_dear;
to_book3s(vcpu)->dsisr = vcpu->arch.fault_dsisr & ~DSISR_NOHPTE;
to_book3s(vcpu)->dsisr |= DSISR_PROTFAULT;
- vcpu->arch.msr |= (vcpu->arch.shadow_msr & 0x00000000f8000000ULL);
+ vcpu->arch.msr |= (vcpu->arch.shadow_srr1 & 0x00000000f8000000ULL);
kvmppc_book3s_queue_irqprio(vcpu, vec);
} else if (page_found == -EINVAL) {
/* Page not found in guest SLB */
@@ -693,7 +693,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
case BOOK3S_INTERRUPT_INST_STORAGE:
vcpu->stat.pf_instruc++;
/* only care about PTEG not found errors, but leave NX alone */
- if (vcpu->arch.shadow_msr & 0x40000000) {
+ if (vcpu->arch.shadow_srr1 & 0x40000000) {
r = kvmppc_handle_pagefault(run, vcpu, vcpu->arch.pc, exit_nr);
vcpu->stat.sp_instruc++;
} else if (vcpu->arch.mmu.is_dcbz32(vcpu) &&
@@ -705,7 +705,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
*/
kvmppc_mmu_pte_flush(vcpu, vcpu->arch.pc, ~0xFFFULL);
} else {
- vcpu->arch.msr |= (vcpu->arch.shadow_msr & 0x58000000);
+ vcpu->arch.msr |= vcpu->arch.shadow_srr1 & 0x58000000;
kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
kvmppc_mmu_pte_flush(vcpu, vcpu->arch.pc, ~0xFFFULL);
r = RESUME_GUEST;
@@ -753,7 +753,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
enum emulation_result er;
ulong flags;
- flags = (vcpu->arch.shadow_msr & 0x1f0000ull);
+ flags = vcpu->arch.shadow_srr1 & 0x1f0000ull;
if (vcpu->arch.msr & MSR_PR) {
#ifdef EXIT_DEBUG
@@ -808,7 +808,8 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
break;
default:
/* Ugh - bork here! What did we get? */
- printk(KERN_EMERG "exit_nr=0x%x | pc=0x%lx | msr=0x%lx\n", exit_nr, vcpu->arch.pc, vcpu->arch.shadow_msr);
+ printk(KERN_EMERG "exit_nr=0x%x | pc=0x%lx | msr=0x%lx\n",
+ exit_nr, vcpu->arch.pc, vcpu->arch.shadow_srr1);
r = RESUME_HOST;
BUG();
break;
diff --git a/arch/powerpc/kvm/book3s_64_interrupts.S b/arch/powerpc/kvm/book3s_64_interrupts.S
index 2ff0b21..c1584d0 100644
--- a/arch/powerpc/kvm/book3s_64_interrupts.S
+++ b/arch/powerpc/kvm/book3s_64_interrupts.S
@@ -169,7 +169,7 @@ kvmppc_handler_highmem:
stw r0, VCPU_LAST_INST(r7)
std r3, VCPU_PC(r7)
- std r4, VCPU_SHADOW_MSR(r7)
+ std r4, VCPU_SHADOW_SRR1(r7)
std r5, VCPU_FAULT_DEAR(r7)
std r6, VCPU_FAULT_DSISR(r7)
--
1.6.0.2
* [PATCH 6/6] KVM: PPC: Move Shadow MSR calculation to function
From: Alexander Graf @ 2010-01-15 13:49 UTC
To: kvm-ppc; +Cc: KVM General
We keep a copy of the MSR around that we use when we go into the guest context.

That copy is basically the normal process MSR flags ORed with some allowed
guest-specified MSR flags. We also AND the external provider bits into this, so
we get traps on FPU usage when we haven't activated it on the host yet.

Currently this calculation is part of the set_msr function that we use whenever
we set the guest MSR value. With the external providers, we also have the case
that we don't modify the guest's MSR but only want to update the shadow MSR.

So let's move the shadow MSR parts to a separate function that we can then use
whenever we only need to update it. That way we don't accidentally call
kvm_vcpu_block from within a preempt notifier context.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/kvm/book3s.c | 27 +++++++++++++++++++++------
1 files changed, 21 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 58f5200..9a271f0 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -94,6 +94,23 @@ static u32 kvmppc_get_dec(struct kvm_vcpu *vcpu)
}
#endif
+static void kvmppc_recalc_shadow_msr(struct kvm_vcpu *vcpu)
+{
+ vcpu->arch.shadow_msr = vcpu->arch.msr;
+ /* Guest MSR values */
+ vcpu->arch.shadow_msr &= MSR_FE0 | MSR_FE1 | MSR_SF | MSR_SE |
+ MSR_BE | MSR_DE;
+ /* Process MSR values */
+ vcpu->arch.shadow_msr |= MSR_ME | MSR_RI | MSR_IR | MSR_DR | MSR_PR |
+ MSR_EE;
+ /* External providers the guest reserved */
+ vcpu->arch.shadow_msr |= (vcpu->arch.msr & vcpu->arch.guest_owned_ext);
+ /* 64-bit Process MSR values */
+#ifdef CONFIG_PPC_BOOK3S_64
+ vcpu->arch.shadow_msr |= MSR_ISF | MSR_HV;
+#endif
+}
+
void kvmppc_set_msr(struct kvm_vcpu *vcpu, u64 msr)
{
ulong old_msr = vcpu->arch.msr;
@@ -101,12 +118,10 @@ void kvmppc_set_msr(struct kvm_vcpu *vcpu, u64 msr)
#ifdef EXIT_DEBUG
printk(KERN_INFO "KVM: Set MSR to 0x%llx\n", msr);
#endif
+
msr &= to_book3s(vcpu)->msr_mask;
vcpu->arch.msr = msr;
- vcpu->arch.shadow_msr = msr | MSR_USER32;
- vcpu->arch.shadow_msr &= (MSR_FE0 | MSR_USER64 | MSR_SE | MSR_BE |
- MSR_DE | MSR_FE1);
- vcpu->arch.shadow_msr |= (msr & vcpu->arch.guest_owned_ext);
+ kvmppc_recalc_shadow_msr(vcpu);
if (msr & (MSR_WE|MSR_POW)) {
if (!vcpu->arch.pending_exceptions) {
@@ -610,7 +625,7 @@ static void kvmppc_giveup_ext(struct kvm_vcpu *vcpu, ulong msr)
vcpu->arch.guest_owned_ext &= ~msr;
current->thread.regs->msr &= ~msr;
- kvmppc_set_msr(vcpu, vcpu->arch.msr);
+ kvmppc_recalc_shadow_msr(vcpu);
}
/* Handle external providers (FPU, Altivec, VSX) */
@@ -664,7 +679,7 @@ static int kvmppc_handle_ext(struct kvm_vcpu *vcpu, unsigned int exit_nr,
vcpu->arch.guest_owned_ext |= msr;
- kvmppc_set_msr(vcpu, vcpu->arch.msr);
+ kvmppc_recalc_shadow_msr(vcpu);
return RESUME_GUEST;
}
--
1.6.0.2
* Re: [PATCH 0/6] KVM: PPC: FPU/Altivec/VSX bringup
From: Avi Kivity @ 2010-01-17 12:33 UTC
To: Alexander Graf; +Cc: kvm-ppc, KVM General
On 01/15/2010 03:49 PM, Alexander Graf wrote:
> Right now the code to use external providers (FPU/Altivec/VSX) is rather hacky.
>
> We just set the respective feature bit in the guest MSR when the guest requests
> it and declare it good. Now, Linux wants to manage those units too: whenever
> a process switch occurs, it saves the external provider state and reloads the
> current thread's.
>
> Unfortunately, we didn't tell Linux about our guest state. So Linux doesn't even
> get the chance to swap any of our registers around, which means it ends up
> restoring registers from random processes - and we lose all state.
>
> This patchset makes at least FPU and Altivec work. I don't have a VSX machine to
> test that extension on. While at it, it also fixes some issues I've stumbled
> across during debugging.
>
> The basic ideas on how this should work come from Benjamin Herrenschmidt.
> Thanks a lot for giving input on this one (and all the other times)!
>
Applied all, thanks.
--
error compiling committee.c: too many arguments to function