* [PATCH 0/8] PR KVM fixes and improvements
From: Paul Mackerras @ 2013-07-11 11:48 UTC
To: Alexander Graf; +Cc: kvm-ppc, kvm
This series fixes some problems in PR KVM and adds support for using
64kB pages, both on the guest side and on the host side if the host
kernel is configured with a 64kB page size. Finally, it makes
the HPT code SMP-safe using a mutex, which means that PR KVM can now
run SMP guests.
This also includes a patch that simplifies things by keeping volatile
register values in the kvm_vcpu struct most of the time instead of the
shadow_vcpu. Doing this means a little more work on guest entry/exit,
but timing measurements of one million mfpvr instructions showed no
statistically significant slowdown.
The series is against Alex Graf's kvm-ppc-queue branch.
Paul.
* [PATCH 1/8] KVM: PPC: Book3S PR: Load up SPRG3 register with guest value on guest entry
From: Paul Mackerras @ 2013-07-11 11:49 UTC
To: Alexander Graf; +Cc: kvm-ppc, kvm
Unlike the other SPRG registers, SPRG3 can be read by usermode
code, and is used in recent kernels to store the CPU and NUMA node
numbers so that they can be read by VDSO functions. Thus we need to
load the guest's SPRG3 value into the real SPRG3 register when entering
the guest, and restore the host's value when exiting the guest. We don't
need to save the guest SPRG3 value when exiting the guest as usermode
code can't modify SPRG3.
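As an aside, a user-space sketch of why this matters: the vDSO-based
getcpu reads SPRG3 directly while guest usermode is running. This is an
illustration only, assuming the packing the powerpc vDSO expects (CPU
number in the low 16 bits, NUMA node in the next 16 bits); it is not
kernel code.

#include <stdint.h>

#define SPRN_USPRG3     259     /* user-readable alias of SPRG3 */

/* Sketch: unpack CPU and NUMA node the way a vDSO-style getcpu would,
 * assuming the kernel stores cpu | (node << 16) in SPRG3. */
static inline void getcpu_from_sprg3(unsigned int *cpu, unsigned int *node)
{
        uint64_t v;

        asm volatile("mfspr %0,%1" : "=r" (v) : "i" (SPRN_USPRG3));
        if (cpu)
                *cpu = v & 0xffff;
        if (node)
                *node = (v >> 16) & 0xffff;
}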
Signed-off-by: Paul Mackerras <paulus@samba.org>
---
arch/powerpc/kernel/asm-offsets.c | 1 +
arch/powerpc/kvm/book3s_interrupts.S | 14 ++++++++++++++
2 files changed, 15 insertions(+)
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 6f16ffa..a67c76e 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -452,6 +452,7 @@ int main(void)
DEFINE(VCPU_SPRG2, offsetof(struct kvm_vcpu, arch.shregs.sprg2));
DEFINE(VCPU_SPRG3, offsetof(struct kvm_vcpu, arch.shregs.sprg3));
#endif
+ DEFINE(VCPU_SHARED_SPRG3, offsetof(struct kvm_vcpu_arch_shared, sprg3));
DEFINE(VCPU_SHARED_SPRG4, offsetof(struct kvm_vcpu_arch_shared, sprg4));
DEFINE(VCPU_SHARED_SPRG5, offsetof(struct kvm_vcpu_arch_shared, sprg5));
DEFINE(VCPU_SHARED_SPRG6, offsetof(struct kvm_vcpu_arch_shared, sprg6));
diff --git a/arch/powerpc/kvm/book3s_interrupts.S b/arch/powerpc/kvm/book3s_interrupts.S
index 48cbbf8..17cfae5 100644
--- a/arch/powerpc/kvm/book3s_interrupts.S
+++ b/arch/powerpc/kvm/book3s_interrupts.S
@@ -92,6 +92,11 @@ kvm_start_lightweight:
PPC_LL r3, VCPU_HFLAGS(r4)
rldicl r3, r3, 0, 63 /* r3 &= 1 */
stb r3, HSTATE_RESTORE_HID5(r13)
+
+ /* Load up guest SPRG3 value, since it's user readable */
+ ld r3, VCPU_SHARED(r4)
+ ld r3, VCPU_SHARED_SPRG3(r3)
+ mtspr SPRN_SPRG3, r3
#endif /* CONFIG_PPC_BOOK3S_64 */
PPC_LL r4, VCPU_SHADOW_MSR(r4) /* get shadow_msr */
@@ -123,6 +128,15 @@ kvmppc_handler_highmem:
/* R7 = vcpu */
PPC_LL r7, GPR4(r1)
+#ifdef CONFIG_PPC_BOOK3S_64
+ /*
+ * Reload kernel SPRG3 value.
+ * No need to save guest value as usermode can't modify SPRG3.
+ */
+ ld r3, PACA_SPRG3(r13)
+ mtspr SPRN_SPRG3, r3
+#endif /* CONFIG_PPC_BOOK3S_64 */
+
PPC_STL r14, VCPU_GPR(R14)(r7)
PPC_STL r15, VCPU_GPR(R15)(r7)
PPC_STL r16, VCPU_GPR(R16)(r7)
--
1.8.3.1
* [PATCH 2/8] KVM: PPC: Book3S PR: Keep volatile reg values in vcpu rather than shadow_vcpu
From: Paul Mackerras @ 2013-07-11 11:50 UTC
To: Alexander Graf; +Cc: kvm-ppc, kvm
Currently PR-style KVM keeps the volatile guest register values
(R0 - R13, CR, LR, CTR, XER, PC) in a shadow_vcpu struct rather than
the main kvm_vcpu struct. For 64-bit, the shadow_vcpu exists in two
places: a kmalloc'd struct and the PACA. It gets copied back
and forth in kvmppc_core_vcpu_load/put(), because the real-mode code
can't rely on being able to access the kmalloc'd struct.
This changes the code to copy the volatile values into the shadow_vcpu
as one of the last things done before entering the guest. Similarly
the values are copied back out of the shadow_vcpu to the kvm_vcpu
immediately after exiting the guest. We arrange for interrupts to
still be disabled at this point so that we can't get preempted on 64-bit
and end up copying values from the wrong PACA.
This means that the accessor functions in kvm_book3s.h for these
registers are greatly simplified, and are the same between PR and HV KVM.
In places where accesses to shadow_vcpu fields are now replaced by
accesses to the kvm_vcpu, we can also remove the svcpu_get/put pairs.
Finally, on 64-bit, we don't need the kmalloc'd struct at all any more.
With this, the time to read the PVR one million times in a loop went
from 478.2ms to 480.1ms (averages of 4 values), a difference which is
not statistically significant given the variability of the results.
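For clarity, here is a rough C rendering of the data movement the new
entry path performs (a sketch using the field names from this patch;
the real code is the assembly in book3s_interrupts.S below):

/* Sketch only: copy the volatile state into the shadow vcpu just
 * before entering the guest. */
static void copy_vcpu_to_svcpu(struct kvmppc_book3s_shadow_vcpu *svcpu,
                               struct kvm_vcpu *vcpu)
{
        int i;

        for (i = 0; i < 14; i++)        /* volatile GPRs R0-R13 */
                svcpu->gpr[i] = vcpu->arch.gpr[i];
        svcpu->cr  = vcpu->arch.cr;
        svcpu->xer = vcpu->arch.xer;
        svcpu->ctr = vcpu->arch.ctr;
        svcpu->lr  = vcpu->arch.lr;
        svcpu->pc  = vcpu->arch.pc;
}

On exit the copy runs the other way and also transfers shadow_srr1,
fault_dar, fault_dsisr and last_inst; on 64-bit, interrupts stay
disabled until the copy completes so we can't be preempted onto another
CPU and read the wrong PACA's shadow_vcpu.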
Signed-off-by: Paul Mackerras <paulus@samba.org>
---
arch/powerpc/include/asm/kvm_book3s.h | 193 +++++-------------------------
arch/powerpc/include/asm/kvm_book3s_asm.h | 6 +-
arch/powerpc/include/asm/kvm_host.h | 1 +
arch/powerpc/kernel/asm-offsets.c | 3 +-
arch/powerpc/kvm/book3s_emulate.c | 8 +-
arch/powerpc/kvm/book3s_interrupts.S | 101 ++++++++++++++++
arch/powerpc/kvm/book3s_pr.c | 68 +++++------
arch/powerpc/kvm/book3s_rmhandlers.S | 5 -
arch/powerpc/kvm/trace.h | 7 +-
9 files changed, 175 insertions(+), 217 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 08891d0..5d68f6c 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -198,149 +198,97 @@ extern void kvm_return_point(void);
#include <asm/kvm_book3s_64.h>
#endif
-#ifdef CONFIG_KVM_BOOK3S_PR
-
-static inline unsigned long kvmppc_interrupt_offset(struct kvm_vcpu *vcpu)
-{
- return to_book3s(vcpu)->hior;
-}
-
-static inline void kvmppc_update_int_pending(struct kvm_vcpu *vcpu,
- unsigned long pending_now, unsigned long old_pending)
-{
- if (pending_now)
- vcpu->arch.shared->int_pending = 1;
- else if (old_pending)
- vcpu->arch.shared->int_pending = 0;
-}
-
static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
{
- if ( num < 14 ) {
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- svcpu->gpr[num] = val;
- svcpu_put(svcpu);
- to_book3s(vcpu)->shadow_vcpu->gpr[num] = val;
- } else
- vcpu->arch.gpr[num] = val;
+ vcpu->arch.gpr[num] = val;
}
static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
{
- if ( num < 14 ) {
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- ulong r = svcpu->gpr[num];
- svcpu_put(svcpu);
- return r;
- } else
- return vcpu->arch.gpr[num];
+ return vcpu->arch.gpr[num];
}
static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- svcpu->cr = val;
- svcpu_put(svcpu);
- to_book3s(vcpu)->shadow_vcpu->cr = val;
+ vcpu->arch.cr = val;
}
static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- u32 r;
- r = svcpu->cr;
- svcpu_put(svcpu);
- return r;
+ return vcpu->arch.cr;
}
static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, u32 val)
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- svcpu->xer = val;
- to_book3s(vcpu)->shadow_vcpu->xer = val;
- svcpu_put(svcpu);
+ vcpu->arch.xer = val;
}
static inline u32 kvmppc_get_xer(struct kvm_vcpu *vcpu)
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- u32 r;
- r = svcpu->xer;
- svcpu_put(svcpu);
- return r;
+ return vcpu->arch.xer;
}
static inline void kvmppc_set_ctr(struct kvm_vcpu *vcpu, ulong val)
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- svcpu->ctr = val;
- svcpu_put(svcpu);
+ vcpu->arch.ctr = val;
}
static inline ulong kvmppc_get_ctr(struct kvm_vcpu *vcpu)
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- ulong r;
- r = svcpu->ctr;
- svcpu_put(svcpu);
- return r;
+ return vcpu->arch.ctr;
}
static inline void kvmppc_set_lr(struct kvm_vcpu *vcpu, ulong val)
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- svcpu->lr = val;
- svcpu_put(svcpu);
+ vcpu->arch.lr = val;
}
static inline ulong kvmppc_get_lr(struct kvm_vcpu *vcpu)
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- ulong r;
- r = svcpu->lr;
- svcpu_put(svcpu);
- return r;
+ return vcpu->arch.lr;
}
static inline void kvmppc_set_pc(struct kvm_vcpu *vcpu, ulong val)
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- svcpu->pc = val;
- svcpu_put(svcpu);
+ vcpu->arch.pc = val;
}
static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- ulong r;
- r = svcpu->pc;
- svcpu_put(svcpu);
- return r;
+ return vcpu->arch.pc;
}
static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
{
ulong pc = kvmppc_get_pc(vcpu);
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- u32 r;
/* Load the instruction manually if it failed to do so in the
* exit path */
- if (svcpu->last_inst == KVM_INST_FETCH_FAILED)
- kvmppc_ld(vcpu, &pc, sizeof(u32), &svcpu->last_inst, false);
+ if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
+ kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
- r = svcpu->last_inst;
- svcpu_put(svcpu);
- return r;
+ return vcpu->arch.last_inst;
}
static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- ulong r;
- r = svcpu->fault_dar;
- svcpu_put(svcpu);
- return r;
+ return vcpu->arch.fault_dar;
+}
+
+#ifdef CONFIG_KVM_BOOK3S_PR
+
+static inline unsigned long kvmppc_interrupt_offset(struct kvm_vcpu *vcpu)
+{
+ return to_book3s(vcpu)->hior;
+}
+
+static inline void kvmppc_update_int_pending(struct kvm_vcpu *vcpu,
+ unsigned long pending_now, unsigned long old_pending)
+{
+ if (pending_now)
+ vcpu->arch.shared->int_pending = 1;
+ else if (old_pending)
+ vcpu->arch.shared->int_pending = 0;
}
static inline bool kvmppc_critical_section(struct kvm_vcpu *vcpu)
@@ -374,83 +322,6 @@ static inline void kvmppc_update_int_pending(struct kvm_vcpu *vcpu,
{
}
-static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
-{
- vcpu->arch.gpr[num] = val;
-}
-
-static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
-{
- return vcpu->arch.gpr[num];
-}
-
-static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
-{
- vcpu->arch.cr = val;
-}
-
-static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
-{
- return vcpu->arch.cr;
-}
-
-static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, u32 val)
-{
- vcpu->arch.xer = val;
-}
-
-static inline u32 kvmppc_get_xer(struct kvm_vcpu *vcpu)
-{
- return vcpu->arch.xer;
-}
-
-static inline void kvmppc_set_ctr(struct kvm_vcpu *vcpu, ulong val)
-{
- vcpu->arch.ctr = val;
-}
-
-static inline ulong kvmppc_get_ctr(struct kvm_vcpu *vcpu)
-{
- return vcpu->arch.ctr;
-}
-
-static inline void kvmppc_set_lr(struct kvm_vcpu *vcpu, ulong val)
-{
- vcpu->arch.lr = val;
-}
-
-static inline ulong kvmppc_get_lr(struct kvm_vcpu *vcpu)
-{
- return vcpu->arch.lr;
-}
-
-static inline void kvmppc_set_pc(struct kvm_vcpu *vcpu, ulong val)
-{
- vcpu->arch.pc = val;
-}
-
-static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
-{
- return vcpu->arch.pc;
-}
-
-static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
-{
- ulong pc = kvmppc_get_pc(vcpu);
-
- /* Load the instruction manually if it failed to do so in the
- * exit path */
- if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
- kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
-
- return vcpu->arch.last_inst;
-}
-
-static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
-{
- return vcpu->arch.fault_dar;
-}
-
static inline bool kvmppc_critical_section(struct kvm_vcpu *vcpu)
{
return false;
diff --git a/arch/powerpc/include/asm/kvm_book3s_asm.h b/arch/powerpc/include/asm/kvm_book3s_asm.h
index 9039d3c..4141409 100644
--- a/arch/powerpc/include/asm/kvm_book3s_asm.h
+++ b/arch/powerpc/include/asm/kvm_book3s_asm.h
@@ -108,14 +108,14 @@ struct kvmppc_book3s_shadow_vcpu {
ulong gpr[14];
u32 cr;
u32 xer;
-
- u32 fault_dsisr;
- u32 last_inst;
ulong ctr;
ulong lr;
ulong pc;
+
ulong shadow_srr1;
ulong fault_dar;
+ u32 fault_dsisr;
+ u32 last_inst;
#ifdef CONFIG_PPC_BOOK3S_32
u32 sr[16]; /* Guest SRs */
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 3328353..7b26395 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -463,6 +463,7 @@ struct kvm_vcpu_arch {
u32 ctrl;
ulong dabr;
ulong cfar;
+ ulong shadow_srr1;
#endif
u32 vrsave; /* also USPRG0 */
u32 mmucr;
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index a67c76e..936d7cf 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -462,6 +462,7 @@ int main(void)
DEFINE(VCPU_SHARED, offsetof(struct kvm_vcpu, arch.shared));
DEFINE(VCPU_SHARED_MSR, offsetof(struct kvm_vcpu_arch_shared, msr));
DEFINE(VCPU_SHADOW_MSR, offsetof(struct kvm_vcpu, arch.shadow_msr));
+ DEFINE(VCPU_SHADOW_SRR1, offsetof(struct kvm_vcpu, arch.shadow_srr1));
DEFINE(VCPU_SHARED_MAS0, offsetof(struct kvm_vcpu_arch_shared, mas0));
DEFINE(VCPU_SHARED_MAS1, offsetof(struct kvm_vcpu_arch_shared, mas1));
@@ -519,8 +520,6 @@ int main(void)
DEFINE(VCORE_NAP_COUNT, offsetof(struct kvmppc_vcore, nap_count));
DEFINE(VCORE_IN_GUEST, offsetof(struct kvmppc_vcore, in_guest));
DEFINE(VCORE_NAPPING_THREADS, offsetof(struct kvmppc_vcore, napping_threads));
- DEFINE(VCPU_SVCPU, offsetof(struct kvmppc_vcpu_book3s, shadow_vcpu) -
- offsetof(struct kvmppc_vcpu_book3s, vcpu));
DEFINE(VCPU_SLB_E, offsetof(struct kvmppc_slb, orige));
DEFINE(VCPU_SLB_V, offsetof(struct kvmppc_slb, origv));
DEFINE(VCPU_SLB_SIZE, sizeof(struct kvmppc_slb));
diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
index 360ce68..34044b1 100644
--- a/arch/powerpc/kvm/book3s_emulate.c
+++ b/arch/powerpc/kvm/book3s_emulate.c
@@ -267,12 +267,9 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
r = kvmppc_st(vcpu, &addr, 32, zeros, true);
if ((r == -ENOENT) || (r == -EPERM)) {
- struct kvmppc_book3s_shadow_vcpu *svcpu;
-
- svcpu = svcpu_get(vcpu);
*advance = 0;
vcpu->arch.shared->dar = vaddr;
- svcpu->fault_dar = vaddr;
+ vcpu->arch.fault_dar = vaddr;
dsisr = DSISR_ISSTORE;
if (r == -ENOENT)
@@ -281,8 +278,7 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
dsisr |= DSISR_PROTFAULT;
vcpu->arch.shared->dsisr = dsisr;
- svcpu->fault_dsisr = dsisr;
- svcpu_put(svcpu);
+ vcpu->arch.fault_dsisr = dsisr;
kvmppc_book3s_queue_irqprio(vcpu,
BOOK3S_INTERRUPT_DATA_STORAGE);
diff --git a/arch/powerpc/kvm/book3s_interrupts.S b/arch/powerpc/kvm/book3s_interrupts.S
index 17cfae5..c935195 100644
--- a/arch/powerpc/kvm/book3s_interrupts.S
+++ b/arch/powerpc/kvm/book3s_interrupts.S
@@ -26,8 +26,12 @@
#if defined(CONFIG_PPC_BOOK3S_64)
#define FUNC(name) GLUE(.,name)
+#define GET_SHADOW_VCPU(reg) mr reg, r13
+
#elif defined(CONFIG_PPC_BOOK3S_32)
#define FUNC(name) name
+#define GET_SHADOW_VCPU(reg) lwz reg, (THREAD + THREAD_KVM_SVCPU)(r2)
+
#endif /* CONFIG_PPC_BOOK3S_XX */
#define VCPU_LOAD_NVGPRS(vcpu) \
@@ -87,8 +91,49 @@ kvm_start_entry:
VCPU_LOAD_NVGPRS(r4)
kvm_start_lightweight:
+ /* Copy registers into shadow vcpu so we can access them in real mode */
+ GET_SHADOW_VCPU(r3)
+ PPC_LL r5, VCPU_GPR(R0)(r4)
+ PPC_LL r6, VCPU_GPR(R1)(r4)
+ PPC_LL r7, VCPU_GPR(R2)(r4)
+ PPC_LL r8, VCPU_GPR(R3)(r4)
+ PPC_STL r5, SVCPU_R0(r3)
+ PPC_STL r6, SVCPU_R1(r3)
+ PPC_STL r7, SVCPU_R2(r3)
+ PPC_STL r8, SVCPU_R3(r3)
+ PPC_LL r5, VCPU_GPR(R4)(r4)
+ PPC_LL r6, VCPU_GPR(R5)(r4)
+ PPC_LL r7, VCPU_GPR(R6)(r4)
+ PPC_LL r8, VCPU_GPR(R7)(r4)
+ PPC_STL r5, SVCPU_R4(r3)
+ PPC_STL r6, SVCPU_R5(r3)
+ PPC_STL r7, SVCPU_R6(r3)
+ PPC_STL r8, SVCPU_R7(r3)
+ PPC_LL r5, VCPU_GPR(R8)(r4)
+ PPC_LL r6, VCPU_GPR(R9)(r4)
+ PPC_LL r7, VCPU_GPR(R10)(r4)
+ PPC_LL r8, VCPU_GPR(R11)(r4)
+ PPC_STL r5, SVCPU_R8(r3)
+ PPC_STL r6, SVCPU_R9(r3)
+ PPC_STL r7, SVCPU_R10(r3)
+ PPC_STL r8, SVCPU_R11(r3)
+ PPC_LL r5, VCPU_GPR(R12)(r4)
+ PPC_LL r6, VCPU_GPR(R13)(r4)
+ lwz r7, VCPU_CR(r4)
+ PPC_LL r8, VCPU_XER(r4)
+ PPC_STL r5, SVCPU_R12(r3)
+ PPC_STL r6, SVCPU_R13(r3)
+ stw r7, SVCPU_CR(r3)
+ stw r8, SVCPU_XER(r3)
+ PPC_LL r5, VCPU_CTR(r4)
+ PPC_LL r6, VCPU_LR(r4)
+ PPC_LL r7, VCPU_PC(r4)
+ PPC_STL r5, SVCPU_CTR(r3)
+ PPC_STL r6, SVCPU_LR(r3)
+ PPC_STL r7, SVCPU_PC(r3)
#ifdef CONFIG_PPC_BOOK3S_64
+ /* Get the dcbz32 flag */
PPC_LL r3, VCPU_HFLAGS(r4)
rldicl r3, r3, 0, 63 /* r3 &= 1 */
stb r3, HSTATE_RESTORE_HID5(r13)
@@ -128,6 +173,61 @@ kvmppc_handler_highmem:
/* R7 = vcpu */
PPC_LL r7, GPR4(r1)
+ /* Transfer reg values from shadow vcpu back to vcpu struct */
+ /* On 64-bit, interrupts are still off at this point */
+ GET_SHADOW_VCPU(r4)
+ PPC_LL r5, SVCPU_R0(r4)
+ PPC_LL r6, SVCPU_R1(r4)
+ PPC_LL r3, SVCPU_R2(r4)
+ PPC_LL r8, SVCPU_R3(r4)
+ PPC_STL r5, VCPU_GPR(R0)(r7)
+ PPC_STL r6, VCPU_GPR(R1)(r7)
+ PPC_STL r3, VCPU_GPR(R2)(r7)
+ PPC_STL r8, VCPU_GPR(R3)(r7)
+ PPC_LL r5, SVCPU_R4(r4)
+ PPC_LL r6, SVCPU_R5(r4)
+ PPC_LL r3, SVCPU_R6(r4)
+ PPC_LL r8, SVCPU_R7(r4)
+ PPC_STL r5, VCPU_GPR(R4)(r7)
+ PPC_STL r6, VCPU_GPR(R5)(r7)
+ PPC_STL r3, VCPU_GPR(R6)(r7)
+ PPC_STL r8, VCPU_GPR(R7)(r7)
+ PPC_LL r5, SVCPU_R8(r4)
+ PPC_LL r6, SVCPU_R9(r4)
+ PPC_LL r3, SVCPU_R10(r4)
+ PPC_LL r8, SVCPU_R11(r4)
+ PPC_STL r5, VCPU_GPR(R8)(r7)
+ PPC_STL r6, VCPU_GPR(R9)(r7)
+ PPC_STL r3, VCPU_GPR(R10)(r7)
+ PPC_STL r8, VCPU_GPR(R11)(r7)
+ PPC_LL r5, SVCPU_R12(r4)
+ PPC_LL r6, SVCPU_R13(r4)
+ lwz r3, SVCPU_CR(r4)
+ lwz r8, SVCPU_XER(r4)
+ PPC_STL r5, VCPU_GPR(R12)(r7)
+ PPC_STL r6, VCPU_GPR(R13)(r7)
+ stw r3, VCPU_CR(r7)
+ PPC_STL r8, VCPU_XER(r7)
+ PPC_LL r5, SVCPU_CTR(r4)
+ PPC_LL r6, SVCPU_LR(r4)
+ PPC_LL r3, SVCPU_PC(r4)
+ PPC_LL r8, SVCPU_SHADOW_SRR1(r4)
+ PPC_STL r5, VCPU_CTR(r7)
+ PPC_STL r6, VCPU_LR(r7)
+ PPC_STL r3, VCPU_PC(r7)
+ PPC_STL r8, VCPU_SHADOW_SRR1(r7)
+ PPC_LL r5, SVCPU_FAULT_DAR(r4)
+ lwz r6, SVCPU_FAULT_DSISR(r4)
+ lwz r3, SVCPU_LAST_INST(r4)
+ PPC_STL r5, VCPU_FAULT_DAR(r7)
+ stw r6, VCPU_FAULT_DSISR(r7)
+ stw r3, VCPU_LAST_INST(r7)
+
+ /* Re-enable interrupts */
+ mfmsr r3
+ ori r3, r3, MSR_EE
+ MTMSR_EERI(r3)
+
#ifdef CONFIG_PPC_BOOK3S_64
/*
* Reload kernel SPRG3 value.
@@ -135,6 +235,7 @@ kvmppc_handler_highmem:
*/
ld r3, PACA_SPRG3(r13)
mtspr SPRN_SPRG3, r3
+
#endif /* CONFIG_PPC_BOOK3S_64 */
PPC_STL r14, VCPU_GPR(R14)(r7)
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index ddfaf56..5aa64e2 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -61,8 +61,6 @@ void kvmppc_core_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
#ifdef CONFIG_PPC_BOOK3S_64
struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
memcpy(svcpu->slb, to_book3s(vcpu)->slb_shadow, sizeof(svcpu->slb));
- memcpy(&get_paca()->shadow_vcpu, to_book3s(vcpu)->shadow_vcpu,
- sizeof(get_paca()->shadow_vcpu));
svcpu->slb_max = to_book3s(vcpu)->slb_shadow_max;
svcpu_put(svcpu);
#endif
@@ -77,8 +75,6 @@ void kvmppc_core_vcpu_put(struct kvm_vcpu *vcpu)
#ifdef CONFIG_PPC_BOOK3S_64
struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
memcpy(to_book3s(vcpu)->slb_shadow, svcpu->slb, sizeof(svcpu->slb));
- memcpy(to_book3s(vcpu)->shadow_vcpu, &get_paca()->shadow_vcpu,
- sizeof(get_paca()->shadow_vcpu));
to_book3s(vcpu)->slb_shadow_max = svcpu->slb_max;
svcpu_put(svcpu);
#endif
@@ -388,22 +384,18 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
if (page_found == -ENOENT) {
/* Page not found in guest PTE entries */
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
vcpu->arch.shared->dar = kvmppc_get_fault_dar(vcpu);
- vcpu->arch.shared->dsisr = svcpu->fault_dsisr;
+ vcpu->arch.shared->dsisr = vcpu->arch.fault_dsisr;
vcpu->arch.shared->msr |=
- (svcpu->shadow_srr1 & 0x00000000f8000000ULL);
- svcpu_put(svcpu);
+ vcpu->arch.shadow_srr1 & 0x00000000f8000000ULL;
kvmppc_book3s_queue_irqprio(vcpu, vec);
} else if (page_found == -EPERM) {
/* Storage protection */
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
vcpu->arch.shared->dar = kvmppc_get_fault_dar(vcpu);
- vcpu->arch.shared->dsisr = svcpu->fault_dsisr & ~DSISR_NOHPTE;
+ vcpu->arch.shared->dsisr = vcpu->arch.fault_dsisr & ~DSISR_NOHPTE;
vcpu->arch.shared->dsisr |= DSISR_PROTFAULT;
vcpu->arch.shared->msr |=
- svcpu->shadow_srr1 & 0x00000000f8000000ULL;
- svcpu_put(svcpu);
+ vcpu->arch.shadow_srr1 & 0x00000000f8000000ULL;
kvmppc_book3s_queue_irqprio(vcpu, vec);
} else if (page_found == -EINVAL) {
/* Page not found in guest SLB */
@@ -623,21 +615,26 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
switch (exit_nr) {
case BOOK3S_INTERRUPT_INST_STORAGE:
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- ulong shadow_srr1 = svcpu->shadow_srr1;
+ ulong shadow_srr1 = vcpu->arch.shadow_srr1;
vcpu->stat.pf_instruc++;
#ifdef CONFIG_PPC_BOOK3S_32
/* We set segments as unused segments when invalidating them. So
* treat the respective fault as segment fault. */
- if (svcpu->sr[kvmppc_get_pc(vcpu) >> SID_SHIFT] == SR_INVALID) {
- kvmppc_mmu_map_segment(vcpu, kvmppc_get_pc(vcpu));
- r = RESUME_GUEST;
+ {
+ struct kvmppc_book3s_shadow_vcpu *svcpu;
+ u32 sr;
+
+ svcpu = svcpu_get(vcpu);
+ sr = svcpu->sr[kvmppc_get_pc(vcpu) >> SID_SHIFT];
svcpu_put(svcpu);
- break;
+ if (sr == SR_INVALID) {
+ kvmppc_mmu_map_segment(vcpu, kvmppc_get_pc(vcpu));
+ r = RESUME_GUEST;
+ break;
+ }
}
#endif
- svcpu_put(svcpu);
/* only care about PTEG not found errors, but leave NX alone */
if (shadow_srr1 & 0x40000000) {
@@ -662,21 +659,26 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
case BOOK3S_INTERRUPT_DATA_STORAGE:
{
ulong dar = kvmppc_get_fault_dar(vcpu);
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- u32 fault_dsisr = svcpu->fault_dsisr;
+ u32 fault_dsisr = vcpu->arch.fault_dsisr;
vcpu->stat.pf_storage++;
#ifdef CONFIG_PPC_BOOK3S_32
/* We set segments as unused segments when invalidating them. So
* treat the respective fault as segment fault. */
- if ((svcpu->sr[dar >> SID_SHIFT]) == SR_INVALID) {
- kvmppc_mmu_map_segment(vcpu, dar);
- r = RESUME_GUEST;
+ {
+ struct kvmppc_book3s_shadow_vcpu *svcpu;
+ u32 sr;
+
+ svcpu = svcpu_get(vcpu);
+ sr = svcpu->sr[dar >> SID_SHIFT];
svcpu_put(svcpu);
- break;
+ if (sr == SR_INVALID) {
+ kvmppc_mmu_map_segment(vcpu, dar);
+ r = RESUME_GUEST;
+ break;
+ }
}
#endif
- svcpu_put(svcpu);
/* The only case we need to handle is missing shadow PTEs */
if (fault_dsisr & DSISR_NOHPTE) {
@@ -723,13 +725,10 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
case BOOK3S_INTERRUPT_H_EMUL_ASSIST:
{
enum emulation_result er;
- struct kvmppc_book3s_shadow_vcpu *svcpu;
ulong flags;
program_interrupt:
- svcpu = svcpu_get(vcpu);
- flags = svcpu->shadow_srr1 & 0x1f0000ull;
- svcpu_put(svcpu);
+ flags = vcpu->arch.shadow_srr1 & 0x1f0000ull;
if (vcpu->arch.shared->msr & MSR_PR) {
#ifdef EXIT_DEBUG
@@ -861,9 +860,7 @@ program_interrupt:
break;
default:
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- ulong shadow_srr1 = svcpu->shadow_srr1;
- svcpu_put(svcpu);
+ ulong shadow_srr1 = vcpu->arch.shadow_srr1;
/* Ugh - bork here! What did we get? */
printk(KERN_EMERG "exit_nr=0x%x | pc=0x%lx | msr=0x%lx\n",
exit_nr, kvmppc_get_pc(vcpu), shadow_srr1);
@@ -1037,11 +1034,12 @@ struct kvm_vcpu *kvmppc_core_vcpu_create(struct kvm *kvm, unsigned int id)
if (!vcpu_book3s)
goto out;
+#ifdef CONFIG_KVM_BOOK3S_32
vcpu_book3s->shadow_vcpu =
kzalloc(sizeof(*vcpu_book3s->shadow_vcpu), GFP_KERNEL);
if (!vcpu_book3s->shadow_vcpu)
goto free_vcpu;
-
+#endif
vcpu = &vcpu_book3s->vcpu;
err = kvm_vcpu_init(vcpu, kvm, id);
if (err)
@@ -1074,8 +1072,10 @@ struct kvm_vcpu *kvmppc_core_vcpu_create(struct kvm *kvm, unsigned int id)
uninit_vcpu:
kvm_vcpu_uninit(vcpu);
free_shadow_vcpu:
+#ifdef CONFIG_KVM_BOOK3S_32
kfree(vcpu_book3s->shadow_vcpu);
free_vcpu:
+#endif
vfree(vcpu_book3s);
out:
return ERR_PTR(err);
diff --git a/arch/powerpc/kvm/book3s_rmhandlers.S b/arch/powerpc/kvm/book3s_rmhandlers.S
index 8f7633e..b64d7f9 100644
--- a/arch/powerpc/kvm/book3s_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_rmhandlers.S
@@ -179,11 +179,6 @@ _GLOBAL(kvmppc_entry_trampoline)
li r6, MSR_IR | MSR_DR
andc r6, r5, r6 /* Clear DR and IR in MSR value */
- /*
- * Set EE in HOST_MSR so that it's enabled when we get into our
- * C exit handler function
- */
- ori r5, r5, MSR_EE
mtsrr0 r7
mtsrr1 r6
RFI
diff --git a/arch/powerpc/kvm/trace.h b/arch/powerpc/kvm/trace.h
index e326489..a088e9a 100644
--- a/arch/powerpc/kvm/trace.h
+++ b/arch/powerpc/kvm/trace.h
@@ -101,17 +101,12 @@ TRACE_EVENT(kvm_exit,
),
TP_fast_assign(
-#ifdef CONFIG_KVM_BOOK3S_PR
- struct kvmppc_book3s_shadow_vcpu *svcpu;
-#endif
__entry->exit_nr = exit_nr;
__entry->pc = kvmppc_get_pc(vcpu);
__entry->dar = kvmppc_get_fault_dar(vcpu);
__entry->msr = vcpu->arch.shared->msr;
#ifdef CONFIG_KVM_BOOK3S_PR
- svcpu = svcpu_get(vcpu);
- __entry->srr1 = svcpu->shadow_srr1;
- svcpu_put(svcpu);
+ __entry->srr1 = vcpu->arch.shadow_srr1;
#endif
__entry->last_inst = vcpu->arch.last_inst;
),
--
1.8.3.1
* [PATCH 3/8] KVM: PPC: Book3S PR: Rework kvmppc_mmu_book3s_64_xlate()
From: Paul Mackerras @ 2013-07-11 11:51 UTC
To: Alexander Graf; +Cc: kvm-ppc, kvm
This reworks kvmppc_mmu_book3s_64_xlate() to make it check the large
page bit in the hashed page table entries (HPTEs) it looks at, and
to simplify and streamline the code. The checking of the first dword
of each HPTE is now done with a single mask and compare operation,
and all the code dealing with the matching HPTE, if we find one,
is consolidated in one place in the main line of the function flow.
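Pulled out of the diff for readability, the mask-and-compare works like
this (constants and variables exactly as in the patch):

        u64 v_val, v_mask;
        int i;

        /* Build the expected first dword once per lookup; SLB_VSID_B_1T
         * and HPTE_V_LARGE are OR'd in as required by the SLB entry. */
        v_val = (avpn & HPTE_V_AVPN) | HPTE_V_VALID;
        v_mask = SLB_VSID_B | HPTE_V_AVPN | HPTE_V_LARGE | HPTE_V_VALID |
                 HPTE_V_SECONDARY;

        /* A candidate HPTE matches iff every relevant field agrees, so
         * the valid bit, primary/secondary hash and AVPN are all checked
         * in a single compare. */
        for (i = 0; i < 16; i += 2) {
                if ((pteg[i] & v_mask) == v_val) {
                        found = true;
                        break;
                }
        }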
Signed-off-by: Paul Mackerras <paulus@samba.org>
---
arch/powerpc/kvm/book3s_64_mmu.c | 150 +++++++++++++++++++--------------------
1 file changed, 72 insertions(+), 78 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_64_mmu.c b/arch/powerpc/kvm/book3s_64_mmu.c
index 739bfba..7e345e0 100644
--- a/arch/powerpc/kvm/book3s_64_mmu.c
+++ b/arch/powerpc/kvm/book3s_64_mmu.c
@@ -182,10 +182,13 @@ static int kvmppc_mmu_book3s_64_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
hva_t ptegp;
u64 pteg[16];
u64 avpn = 0;
+ u64 v, r;
+ u64 v_val, v_mask;
+ u64 eaddr_mask;
int i;
- u8 key = 0;
+ u8 pp, key = 0;
bool found = false;
- int second = 0;
+ bool second = false;
ulong mp_ea = vcpu->arch.magic_page_ea;
/* Magic page override */
@@ -208,8 +211,16 @@ static int kvmppc_mmu_book3s_64_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
goto no_seg_found;
avpn = kvmppc_mmu_book3s_64_get_avpn(slbe, eaddr);
+ v_val = avpn & HPTE_V_AVPN;
+
if (slbe->tb)
- avpn |= SLB_VSID_B_1T;
+ v_val |= SLB_VSID_B_1T;
+ if (slbe->large)
+ v_val |= HPTE_V_LARGE;
+ v_val |= HPTE_V_VALID;
+
+ v_mask = SLB_VSID_B | HPTE_V_AVPN | HPTE_V_LARGE | HPTE_V_VALID |
+ HPTE_V_SECONDARY;
do_second:
ptegp = kvmppc_mmu_book3s_64_get_pteg(vcpu_book3s, slbe, eaddr, second);
@@ -227,91 +238,74 @@ do_second:
key = 4;
for (i=0; i<16; i+=2) {
- u64 v = pteg[i];
- u64 r = pteg[i+1];
-
- /* Valid check */
- if (!(v & HPTE_V_VALID))
- continue;
- /* Hash check */
- if ((v & HPTE_V_SECONDARY) != second)
- continue;
-
- /* AVPN compare */
- if (HPTE_V_COMPARE(avpn, v)) {
- u8 pp = (r & HPTE_R_PP) | key;
- int eaddr_mask = 0xFFF;
-
- gpte->eaddr = eaddr;
- gpte->vpage = kvmppc_mmu_book3s_64_ea_to_vp(vcpu,
- eaddr,
- data);
- if (slbe->large)
- eaddr_mask = 0xFFFFFF;
- gpte->raddr = (r & HPTE_R_RPN) | (eaddr & eaddr_mask);
- gpte->may_execute = ((r & HPTE_R_N) ? false : true);
- gpte->may_read = false;
- gpte->may_write = false;
-
- switch (pp) {
- case 0:
- case 1:
- case 2:
- case 6:
- gpte->may_write = true;
- /* fall through */
- case 3:
- case 5:
- case 7:
- gpte->may_read = true;
- break;
- }
-
- dprintk("KVM MMU: Translated 0x%lx [0x%llx] -> 0x%llx "
- "-> 0x%lx\n",
- eaddr, avpn, gpte->vpage, gpte->raddr);
+ /* Check all relevant fields of 1st dword */
+ if ((pteg[i] & v_mask) == v_val) {
found = true;
break;
}
}
- /* Update PTE R and C bits, so the guest's swapper knows we used the
- * page */
- if (found) {
- u32 oldr = pteg[i+1];
+ if (!found) {
+ if (second)
+ goto no_page_found;
+ v_val |= HPTE_V_SECONDARY;
+ second = true;
+ goto do_second;
+ }
- if (gpte->may_read) {
- /* Set the accessed flag */
- pteg[i+1] |= HPTE_R_R;
- }
- if (gpte->may_write) {
- /* Set the dirty flag */
- pteg[i+1] |= HPTE_R_C;
- } else {
- dprintk("KVM: Mapping read-only page!\n");
- }
+ v = pteg[i];
+ r = pteg[i+1];
+ pp = (r & HPTE_R_PP) | key;
+ eaddr_mask = 0xFFF;
+
+ gpte->eaddr = eaddr;
+ gpte->vpage = kvmppc_mmu_book3s_64_ea_to_vp(vcpu, eaddr, data);
+ if (slbe->large)
+ eaddr_mask = 0xFFFFFF;
+ gpte->raddr = (r & HPTE_R_RPN & ~eaddr_mask) | (eaddr & eaddr_mask);
+ gpte->may_execute = ((r & HPTE_R_N) ? false : true);
+ gpte->may_read = false;
+ gpte->may_write = false;
+
+ switch (pp) {
+ case 0:
+ case 1:
+ case 2:
+ case 6:
+ gpte->may_write = true;
+ /* fall through */
+ case 3:
+ case 5:
+ case 7:
+ gpte->may_read = true;
+ break;
+ }
- /* Write back into the PTEG */
- if (pteg[i+1] != oldr)
- copy_to_user((void __user *)ptegp, pteg, sizeof(pteg));
+ dprintk("KVM MMU: Translated 0x%lx [0x%llx] -> 0x%llx "
+ "-> 0x%lx\n",
+ eaddr, avpn, gpte->vpage, gpte->raddr);
- if (!gpte->may_read)
- return -EPERM;
- return 0;
- } else {
- dprintk("KVM MMU: No PTE found (ea=0x%lx sdr1=0x%llx "
- "ptegp=0x%lx)\n",
- eaddr, to_book3s(vcpu)->sdr1, ptegp);
- for (i = 0; i < 16; i += 2)
- dprintk(" %02d: 0x%llx - 0x%llx (0x%llx)\n",
- i, pteg[i], pteg[i+1], avpn);
-
- if (!second) {
- second = HPTE_V_SECONDARY;
- goto do_second;
- }
+ /* Update PTE R and C bits, so the guest's swapper knows we used the
+ * page */
+ if (gpte->may_read) {
+ /* Set the accessed flag */
+ r |= HPTE_R_R;
+ }
+ if (data && gpte->may_write) {
+ /* Set the dirty flag -- XXX even if not writing */
+ r |= HPTE_R_C;
+ }
+
+ /* Write back into the PTEG */
+ if (pteg[i+1] != r) {
+ pteg[i+1] = r;
+ copy_to_user((void __user *)ptegp, pteg, sizeof(pteg));
}
+ if (!gpte->may_read)
+ return -EPERM;
+ return 0;
+
no_page_found:
return -ENOENT;
--
1.8.3.1
* [PATCH 4/8] KVM: PPC: Book3S PR: Allow guest to use 64k pages
From: Paul Mackerras @ 2013-07-11 11:52 UTC
To: Alexander Graf; +Cc: kvm-ppc, kvm
This adds the code to interpret 64k HPTEs in the guest hashed page
table (HPT), 64k SLB entries, and to tell the guest about 64k pages
in kvm_vm_ioctl_get_smmu_info(). Guest 64k pages are still shadowed
by 4k pages.
This also adds another hash table to the four we have already in
book3s_mmu_hpte.c to allow us to find all the PTEs that we have
instantiated that match a given 64k guest page.
The tlbie instruction changed starting with POWER6 to use a bit in
the RB operand to indicate large page invalidations, and to use other
RB bits to indicate the base and actual page sizes and the segment
size. 64k pages came in slightly earlier, with POWER5++. At present
we use one bit in vcpu->arch.hflags to indicate that the emulated
cpu supports 64k pages and also has the new tlbie definition. If
we ever want to support emulation of POWER5++, we will need to use
another bit.
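For reference, the SLB base-page-size decoding this patch adds to the
slbmte emulation, pulled out as a standalone sketch (constant names as
in the patch; reserved LP encodings fall back to 4k, as in the patch):

static int slb_base_page_size(u64 rs, bool multi_pgsize)
{
        if (!(rs & SLB_VSID_L))
                return MMU_PAGE_4K;
        if (!multi_pgsize)              /* pre-POWER6 emulated CPU */
                return MMU_PAGE_16M;
        switch (rs & SLB_VSID_LP) {
        case SLB_VSID_LP_00:
                return MMU_PAGE_16M;
        case SLB_VSID_LP_01:
                return MMU_PAGE_64K;
        }
        return MMU_PAGE_4K;             /* reserved encoding */
}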
Signed-off-by: Paul Mackerras <paulus@samba.org>
---
arch/powerpc/include/asm/kvm_asm.h | 1 +
arch/powerpc/include/asm/kvm_book3s.h | 6 +++
arch/powerpc/include/asm/kvm_host.h | 4 ++
arch/powerpc/kvm/book3s_64_mmu.c | 92 +++++++++++++++++++++++++++++++----
arch/powerpc/kvm/book3s_mmu_hpte.c | 50 +++++++++++++++++++
arch/powerpc/kvm/book3s_pr.c | 30 +++++++++++-
6 files changed, 173 insertions(+), 10 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_asm.h b/arch/powerpc/include/asm/kvm_asm.h
index 851bac7..3d70b7e 100644
--- a/arch/powerpc/include/asm/kvm_asm.h
+++ b/arch/powerpc/include/asm/kvm_asm.h
@@ -123,6 +123,7 @@
#define BOOK3S_HFLAG_SLB 0x2
#define BOOK3S_HFLAG_PAIRED_SINGLE 0x4
#define BOOK3S_HFLAG_NATIVE_PS 0x8
+#define BOOK3S_HFLAG_MULTI_PGSIZE 0x10
#define RESUME_FLAG_NV (1<<0) /* Reload guest nonvolatile state? */
#define RESUME_FLAG_HOST (1<<1) /* Resume host? */
diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 5d68f6c..eacf6e7 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -58,6 +58,9 @@ struct hpte_cache {
struct hlist_node list_pte_long;
struct hlist_node list_vpte;
struct hlist_node list_vpte_long;
+#ifdef CONFIG_PPC_BOOK3S_64
+ struct hlist_node list_vpte_64k;
+#endif
struct rcu_head rcu_head;
u64 host_vpn;
u64 pfn;
@@ -99,6 +102,9 @@ struct kvmppc_vcpu_book3s {
struct hlist_head hpte_hash_pte_long[HPTEG_HASH_NUM_PTE_LONG];
struct hlist_head hpte_hash_vpte[HPTEG_HASH_NUM_VPTE];
struct hlist_head hpte_hash_vpte_long[HPTEG_HASH_NUM_VPTE_LONG];
+#ifdef CONFIG_PPC_BOOK3S_64
+ struct hlist_head hpte_hash_vpte_64k[HPTEG_HASH_NUM_VPTE_64K];
+#endif
int hpte_cache_count;
spinlock_t mmu_lock;
};
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 7b26395..2d3c770 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -73,10 +73,12 @@ extern void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
#define HPTEG_HASH_BITS_PTE_LONG 12
#define HPTEG_HASH_BITS_VPTE 13
#define HPTEG_HASH_BITS_VPTE_LONG 5
+#define HPTEG_HASH_BITS_VPTE_64K 11
#define HPTEG_HASH_NUM_PTE (1 << HPTEG_HASH_BITS_PTE)
#define HPTEG_HASH_NUM_PTE_LONG (1 << HPTEG_HASH_BITS_PTE_LONG)
#define HPTEG_HASH_NUM_VPTE (1 << HPTEG_HASH_BITS_VPTE)
#define HPTEG_HASH_NUM_VPTE_LONG (1 << HPTEG_HASH_BITS_VPTE_LONG)
+#define HPTEG_HASH_NUM_VPTE_64K (1 << HPTEG_HASH_BITS_VPTE_64K)
/* Physical Address Mask - allowed range of real mode RAM access */
#define KVM_PAM 0x0fffffffffffffffULL
@@ -328,6 +330,7 @@ struct kvmppc_pte {
bool may_read : 1;
bool may_write : 1;
bool may_execute : 1;
+ u8 page_size; /* MMU_PAGE_xxx */
};
struct kvmppc_mmu {
@@ -360,6 +363,7 @@ struct kvmppc_slb {
bool large : 1; /* PTEs are 16MB */
bool tb : 1; /* 1TB segment */
bool class : 1;
+ u8 base_page_size; /* MMU_PAGE_xxx */
};
# ifdef CONFIG_PPC_FSL_BOOK3E
diff --git a/arch/powerpc/kvm/book3s_64_mmu.c b/arch/powerpc/kvm/book3s_64_mmu.c
index 7e345e0..d5fa26c 100644
--- a/arch/powerpc/kvm/book3s_64_mmu.c
+++ b/arch/powerpc/kvm/book3s_64_mmu.c
@@ -107,9 +107,20 @@ static u64 kvmppc_mmu_book3s_64_ea_to_vp(struct kvm_vcpu *vcpu, gva_t eaddr,
return kvmppc_slb_calc_vpn(slb, eaddr);
}
+static int mmu_pagesize(int mmu_pg)
+{
+ switch (mmu_pg) {
+ case MMU_PAGE_64K:
+ return 16;
+ case MMU_PAGE_16M:
+ return 24;
+ }
+ return 12;
+}
+
static int kvmppc_mmu_book3s_64_get_pagesize(struct kvmppc_slb *slbe)
{
- return slbe->large ? 24 : 12;
+ return mmu_pagesize(slbe->base_page_size);
}
static u32 kvmppc_mmu_book3s_64_get_page(struct kvmppc_slb *slbe, gva_t eaddr)
@@ -166,14 +177,34 @@ static u64 kvmppc_mmu_book3s_64_get_avpn(struct kvmppc_slb *slbe, gva_t eaddr)
avpn = kvmppc_mmu_book3s_64_get_page(slbe, eaddr);
avpn |= slbe->vsid << (kvmppc_slb_sid_shift(slbe) - p);
- if (p < 24)
- avpn >>= ((80 - p) - 56) - 8;
+ if (p < 16)
+ avpn >>= ((80 - p) - 56) - 8; /* 16 - p */
else
- avpn <<= 8;
+ avpn <<= p - 16;
return avpn;
}
+/*
+ * Return page size encoded in the second word of a HPTE, or
+ * -1 for an invalid encoding for the base page size indicated by
+ * the SLB entry. This doesn't handle mixed pagesize segments yet.
+ */
+static int decode_pagesize(struct kvmppc_slb *slbe, u64 r)
+{
+ switch (slbe->base_page_size) {
+ case MMU_PAGE_64K:
+ if ((r & 0xf000) == 0x1000)
+ return MMU_PAGE_64K;
+ break;
+ case MMU_PAGE_16M:
+ if ((r & 0xff000) == 0)
+ return MMU_PAGE_16M;
+ break;
+ }
+ return -1;
+}
+
static int kvmppc_mmu_book3s_64_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
struct kvmppc_pte *gpte, bool data)
{
@@ -189,6 +220,7 @@ static int kvmppc_mmu_book3s_64_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
u8 pp, key = 0;
bool found = false;
bool second = false;
+ int pgsize;
ulong mp_ea = vcpu->arch.magic_page_ea;
/* Magic page override */
@@ -202,6 +234,7 @@ static int kvmppc_mmu_book3s_64_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
gpte->may_execute = true;
gpte->may_read = true;
gpte->may_write = true;
+ gpte->page_size = MMU_PAGE_4K;
return 0;
}
@@ -222,6 +255,8 @@ static int kvmppc_mmu_book3s_64_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
v_mask = SLB_VSID_B | HPTE_V_AVPN | HPTE_V_LARGE | HPTE_V_VALID |
HPTE_V_SECONDARY;
+ pgsize = slbe->large ? MMU_PAGE_16M : MMU_PAGE_4K;
+
do_second:
ptegp = kvmppc_mmu_book3s_64_get_pteg(vcpu_book3s, slbe, eaddr, second);
if (kvm_is_error_hva(ptegp))
@@ -240,6 +275,13 @@ do_second:
for (i=0; i<16; i+=2) {
/* Check all relevant fields of 1st dword */
if ((pteg[i] & v_mask) == v_val) {
+ /* If large page bit is set, check pgsize encoding */
+ if (slbe->large &&
+ (vcpu->arch.hflags & BOOK3S_HFLAG_MULTI_PGSIZE)) {
+ pgsize = decode_pagesize(slbe, pteg[i+1]);
+ if (pgsize < 0)
+ continue;
+ }
found = true;
break;
}
@@ -256,13 +298,13 @@ do_second:
v = pteg[i];
r = pteg[i+1];
pp = (r & HPTE_R_PP) | key;
- eaddr_mask = 0xFFF;
gpte->eaddr = eaddr;
gpte->vpage = kvmppc_mmu_book3s_64_ea_to_vp(vcpu, eaddr, data);
- if (slbe->large)
- eaddr_mask = 0xFFFFFF;
+
+ eaddr_mask = (1ull << mmu_pagesize(pgsize)) - 1;
gpte->raddr = (r & HPTE_R_RPN & ~eaddr_mask) | (eaddr & eaddr_mask);
+ gpte->page_size = pgsize;
gpte->may_execute = ((r & HPTE_R_N) ? false : true);
gpte->may_read = false;
gpte->may_write = false;
@@ -345,6 +387,21 @@ static void kvmppc_mmu_book3s_64_slbmte(struct kvm_vcpu *vcpu, u64 rs, u64 rb)
slbe->nx = (rs & SLB_VSID_N) ? 1 : 0;
slbe->class = (rs & SLB_VSID_C) ? 1 : 0;
+ slbe->base_page_size = MMU_PAGE_4K;
+ if (slbe->large) {
+ if (vcpu->arch.hflags & BOOK3S_HFLAG_MULTI_PGSIZE) {
+ switch (rs & SLB_VSID_LP) {
+ case SLB_VSID_LP_00:
+ slbe->base_page_size = MMU_PAGE_16M;
+ break;
+ case SLB_VSID_LP_01:
+ slbe->base_page_size = MMU_PAGE_64K;
+ break;
+ }
+ } else
+ slbe->base_page_size = MMU_PAGE_16M;
+ }
+
slbe->orige = rb & (ESID_MASK | SLB_ESID_V);
slbe->origv = rs;
@@ -463,8 +520,25 @@ static void kvmppc_mmu_book3s_64_tlbie(struct kvm_vcpu *vcpu, ulong va,
dprintk("KVM MMU: tlbie(0x%lx)\n", va);
- if (large)
- mask = 0xFFFFFF000ULL;
+ /*
+ * The tlbie instruction changed behaviour starting with
+ * POWER6. POWER6 and later don't have the large page flag
+ * in the instruction but in the RB value, along with bits
+ * indicating page and segment sizes.
+ */
+ if (vcpu->arch.hflags & BOOK3S_HFLAG_MULTI_PGSIZE) {
+ /* POWER6 or later */
+ if (va & 1) { /* L bit */
+ if ((va & 0xf000) == 0x1000)
+ mask = 0xFFFFFFFF0ULL; /* 64k page */
+ else
+ mask = 0xFFFFFF000ULL; /* 16M page */
+ }
+ } else {
+ /* older processors, e.g. PPC970 */
+ if (large)
+ mask = 0xFFFFFF000ULL;
+ }
kvmppc_mmu_pte_vflush(vcpu, va >> 12, mask);
}
diff --git a/arch/powerpc/kvm/book3s_mmu_hpte.c b/arch/powerpc/kvm/book3s_mmu_hpte.c
index da8b13c..d2d280b 100644
--- a/arch/powerpc/kvm/book3s_mmu_hpte.c
+++ b/arch/powerpc/kvm/book3s_mmu_hpte.c
@@ -56,6 +56,14 @@ static inline u64 kvmppc_mmu_hash_vpte_long(u64 vpage)
HPTEG_HASH_BITS_VPTE_LONG);
}
+#ifdef CONFIG_PPC_BOOK3S_64
+static inline u64 kvmppc_mmu_hash_vpte_64k(u64 vpage)
+{
+ return hash_64((vpage & 0xffffffff0ULL) >> 4,
+ HPTEG_HASH_BITS_VPTE_64K);
+}
+#endif
+
void kvmppc_mmu_hpte_cache_map(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
{
u64 index;
@@ -83,6 +91,13 @@ void kvmppc_mmu_hpte_cache_map(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
hlist_add_head_rcu(&pte->list_vpte_long,
&vcpu3s->hpte_hash_vpte_long[index]);
+#ifdef CONFIG_PPC_BOOK3S_64
+ /* Add to vPTE_64k list */
+ index = kvmppc_mmu_hash_vpte_64k(pte->pte.vpage);
+ hlist_add_head_rcu(&pte->list_vpte_64k,
+ &vcpu3s->hpte_hash_vpte_64k[index]);
+#endif
+
spin_unlock(&vcpu3s->mmu_lock);
}
@@ -113,6 +128,9 @@ static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
hlist_del_init_rcu(&pte->list_pte_long);
hlist_del_init_rcu(&pte->list_vpte);
hlist_del_init_rcu(&pte->list_vpte_long);
+#ifdef CONFIG_PPC_BOOK3S_64
+ hlist_del_init_rcu(&pte->list_vpte_64k);
+#endif
spin_unlock(&vcpu3s->mmu_lock);
@@ -219,6 +237,29 @@ static void kvmppc_mmu_pte_vflush_short(struct kvm_vcpu *vcpu, u64 guest_vp)
rcu_read_unlock();
}
+#ifdef CONFIG_PPC_BOOK3S_64
+/* Flush with mask 0xffffffff0 */
+static void kvmppc_mmu_pte_vflush_64k(struct kvm_vcpu *vcpu, u64 guest_vp)
+{
+ struct kvmppc_vcpu_book3s *vcpu3s = to_book3s(vcpu);
+ struct hlist_head *list;
+ struct hpte_cache *pte;
+ u64 vp_mask = 0xffffffff0ULL;
+
+ list = &vcpu3s->hpte_hash_vpte_64k[
+ kvmppc_mmu_hash_vpte_64k(guest_vp)];
+
+ rcu_read_lock();
+
+ /* Check the list for matching entries and invalidate */
+ hlist_for_each_entry_rcu(pte, list, list_vpte_64k)
+ if ((pte->pte.vpage & vp_mask) == guest_vp)
+ invalidate_pte(vcpu, pte);
+
+ rcu_read_unlock();
+}
+#endif
+
/* Flush with mask 0xffffff000 */
static void kvmppc_mmu_pte_vflush_long(struct kvm_vcpu *vcpu, u64 guest_vp)
{
@@ -249,6 +290,11 @@ void kvmppc_mmu_pte_vflush(struct kvm_vcpu *vcpu, u64 guest_vp, u64 vp_mask)
case 0xfffffffffULL:
kvmppc_mmu_pte_vflush_short(vcpu, guest_vp);
break;
+#ifdef CONFIG_PPC_BOOK3S_64
+ case 0xffffffff0ULL:
+ kvmppc_mmu_pte_vflush_64k(vcpu, guest_vp);
+ break;
+#endif
case 0xffffff000ULL:
kvmppc_mmu_pte_vflush_long(vcpu, guest_vp);
break;
@@ -320,6 +366,10 @@ int kvmppc_mmu_hpte_init(struct kvm_vcpu *vcpu)
ARRAY_SIZE(vcpu3s->hpte_hash_vpte));
kvmppc_mmu_hpte_init_hash(vcpu3s->hpte_hash_vpte_long,
ARRAY_SIZE(vcpu3s->hpte_hash_vpte_long));
+#ifdef CONFIG_PPC_BOOK3S_64
+ kvmppc_mmu_hpte_init_hash(vcpu3s->hpte_hash_vpte_64k,
+ ARRAY_SIZE(vcpu3s->hpte_hash_vpte_64k));
+#endif
spin_lock_init(&vcpu3s->mmu_lock);
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 5aa64e2..c465db5 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -252,6 +252,23 @@ void kvmppc_set_pvr(struct kvm_vcpu *vcpu, u32 pvr)
if (!strcmp(cur_cpu_spec->platform, "ppc-cell-be"))
to_book3s(vcpu)->msr_mask &= ~(MSR_FE0 | MSR_FE1);
+ /*
+ * If they're asking for POWER6 or later, set the flag
+ * indicating that we can do multiple large page sizes.
+ * We also take this to mean that tlbie has the large page
+ * bit in the RB operand instead of the instruction and
+ * that the CPU can do 1TB segments. If we ever wanted
+ * to emulate POWER5++ we would need to separate these things.
+ */
+ switch (PVR_VER(pvr)) {
+ case PVR_POWER6:
+ case PVR_POWER7:
+ case PVR_POWER7p:
+ case PVR_POWER8:
+ vcpu->arch.hflags |= BOOK3S_HFLAG_MULTI_PGSIZE;
+ break;
+ }
+
#ifdef CONFIG_PPC_BOOK3S_32
/* 32 bit Book3S always has 32 byte dcbz */
vcpu->arch.hflags |= BOOK3S_HFLAG_DCBZ32;
@@ -1052,8 +1069,13 @@ struct kvm_vcpu *kvmppc_core_vcpu_create(struct kvm *kvm, unsigned int id)
goto uninit_vcpu;
#ifdef CONFIG_PPC_BOOK3S_64
- /* default to book3s_64 (970fx) */
+ /*
+ * Default to the same as the host if we're on a POWER7[+],
+ * otherwise default to PPC970FX.
+ */
vcpu->arch.pvr = 0x3C0301;
+ if (cpu_has_feature(CPU_FTR_ARCH_206))
+ vcpu->arch.pvr = mfspr(SPRN_PVR);
#else
/* default to book3s_32 (750) */
vcpu->arch.pvr = 0x84202;
@@ -1256,6 +1278,12 @@ int kvm_vm_ioctl_get_smmu_info(struct kvm *kvm, struct kvm_ppc_smmu_info *info)
info->sps[1].enc[0].page_shift = 24;
info->sps[1].enc[0].pte_enc = 0;
+ /* 64k large page size */
+ info->sps[2].page_shift = 16;
+ info->sps[2].slb_enc = SLB_VSID_L | SLB_VSID_LP_01;
+ info->sps[2].enc[0].page_shift = 16;
+ info->sps[2].enc[0].pte_enc = 1;
+
return 0;
}
#endif /* CONFIG_PPC64 */
--
1.8.3.1
* [PATCH 5/8] KVM: PPC: Book3S PR: Use 64k host pages where possible
From: Paul Mackerras @ 2013-07-11 11:53 UTC
To: Alexander Graf; +Cc: kvm-ppc, kvm
Currently, PR KVM uses 4k pages for the host-side mappings of guest
memory, regardless of the host page size. When the host page size is
64kB, we might as well use 64k host page mappings for guest mappings
of 64kB and larger pages and for guest real-mode mappings. However,
the magic page has to remain a 4k page.
To implement this, we first add another flag bit to the guest VSID
values we use, to indicate that this segment is one where host pages
should be mapped using 64k pages. For segments with this bit set
we set the bits in the shadow SLB entry to indicate a 64k base page
size. When faulting in host HPTEs for this segment, we make them
64k HPTEs instead of 4k. We record the pagesize in struct hpte_cache
for use when invalidating the HPTE.
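The host-side decision then reduces to the following (a sketch
mirroring the kvmppc_mmu_map_page() hunk below):

        /* Pick the host HPTE size from the VSID_64K flag bit.  If we
         * fall back to a 4k HPTE on a 64k-page host kernel, four more
         * bits of the guest real address must be carried into the host
         * real address by hand. */
        int hpsize = (vsid & VSID_64K) ? MMU_PAGE_64K : MMU_PAGE_4K;

        if (hpsize == MMU_PAGE_4K)
                hpaddr |= orig_pte->raddr & (~0xfffULL & ~PAGE_MASK);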
Signed-off-by: Paul Mackerras <paulus@samba.org>
---
arch/powerpc/include/asm/kvm_book3s.h | 6 ++++--
arch/powerpc/kvm/book3s_32_mmu.c | 1 +
arch/powerpc/kvm/book3s_64_mmu.c | 35 ++++++++++++++++++++++++++++++-----
arch/powerpc/kvm/book3s_64_mmu_host.c | 27 +++++++++++++++++++++------
arch/powerpc/kvm/book3s_pr.c | 1 +
5 files changed, 57 insertions(+), 13 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index eacf6e7..f612217 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -66,6 +66,7 @@ struct hpte_cache {
u64 pfn;
ulong slot;
struct kvmppc_pte pte;
+ int pagesize;
};
struct kvmppc_vcpu_book3s {
@@ -113,8 +114,9 @@ struct kvmppc_vcpu_book3s {
#define CONTEXT_GUEST 1
#define CONTEXT_GUEST_END 2
-#define VSID_REAL 0x0fffffffffc00000ULL
-#define VSID_BAT 0x0fffffffffb00000ULL
+#define VSID_REAL 0x07ffffffffc00000ULL
+#define VSID_BAT 0x07ffffffffb00000ULL
+#define VSID_64K 0x0800000000000000ULL
#define VSID_1T 0x1000000000000000ULL
#define VSID_REAL_DR 0x2000000000000000ULL
#define VSID_REAL_IR 0x4000000000000000ULL
diff --git a/arch/powerpc/kvm/book3s_32_mmu.c b/arch/powerpc/kvm/book3s_32_mmu.c
index c8cefdd..af04553 100644
--- a/arch/powerpc/kvm/book3s_32_mmu.c
+++ b/arch/powerpc/kvm/book3s_32_mmu.c
@@ -308,6 +308,7 @@ static int kvmppc_mmu_book3s_32_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
ulong mp_ea = vcpu->arch.magic_page_ea;
pte->eaddr = eaddr;
+ pte->page_size = MMU_PAGE_4K;
/* Magic page override */
if (unlikely(mp_ea) &&
diff --git a/arch/powerpc/kvm/book3s_64_mmu.c b/arch/powerpc/kvm/book3s_64_mmu.c
index d5fa26c..658ccd7 100644
--- a/arch/powerpc/kvm/book3s_64_mmu.c
+++ b/arch/powerpc/kvm/book3s_64_mmu.c
@@ -542,6 +542,16 @@ static void kvmppc_mmu_book3s_64_tlbie(struct kvm_vcpu *vcpu, ulong va,
kvmppc_mmu_pte_vflush(vcpu, va >> 12, mask);
}
+#ifdef CONFIG_PPC_64K_PAGES
+static int segment_contains_magic_page(struct kvm_vcpu *vcpu, ulong esid)
+{
+ ulong mp_ea = vcpu->arch.magic_page_ea;
+
+ return mp_ea && !(vcpu->arch.shared->msr & MSR_PR) &&
+ (mp_ea >> SID_SHIFT) == esid;
+}
+#endif
+
static int kvmppc_mmu_book3s_64_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
u64 *vsid)
{
@@ -549,11 +559,13 @@ static int kvmppc_mmu_book3s_64_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
struct kvmppc_slb *slb;
u64 gvsid = esid;
ulong mp_ea = vcpu->arch.magic_page_ea;
+ int pagesize = MMU_PAGE_64K;
if (vcpu->arch.shared->msr & (MSR_DR|MSR_IR)) {
slb = kvmppc_mmu_book3s_64_find_slbe(vcpu, ea);
if (slb) {
gvsid = slb->vsid;
+ pagesize = slb->base_page_size;
if (slb->tb) {
gvsid <<= SID_SHIFT_1T - SID_SHIFT;
gvsid |= esid & ((1ul << (SID_SHIFT_1T - SID_SHIFT)) - 1);
@@ -564,28 +576,41 @@ static int kvmppc_mmu_book3s_64_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
switch (vcpu->arch.shared->msr & (MSR_DR|MSR_IR)) {
case 0:
- *vsid = VSID_REAL | esid;
+ gvsid = VSID_REAL | esid;
break;
case MSR_IR:
- *vsid = VSID_REAL_IR | gvsid;
+ gvsid |= VSID_REAL_IR;
break;
case MSR_DR:
- *vsid = VSID_REAL_DR | gvsid;
+ gvsid |= VSID_REAL_DR;
break;
case MSR_DR|MSR_IR:
if (!slb)
goto no_slb;
- *vsid = gvsid;
break;
default:
BUG();
break;
}
+#ifdef CONFIG_PPC_64K_PAGES
+ /*
+ * Mark this as a 64k segment if the host is using
+ * 64k pages, the host MMU supports 64k pages and
+ * the guest segment page size is >= 64k,
+ * but not if this segment contains the magic page.
+ */
+ if (pagesize >= MMU_PAGE_64K &&
+ mmu_psize_defs[MMU_PAGE_64K].shift &&
+ !segment_contains_magic_page(vcpu, esid))
+ gvsid |= VSID_64K;
+#endif
+
if (vcpu->arch.shared->msr & MSR_PR)
- *vsid |= VSID_PR;
+ gvsid |= VSID_PR;
+ *vsid = gvsid;
return 0;
no_slb:
diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index b350d94..21a51e8 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -34,7 +34,7 @@
void kvmppc_mmu_invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
{
ppc_md.hpte_invalidate(pte->slot, pte->host_vpn,
- MMU_PAGE_4K, MMU_SEGSIZE_256M,
+ pte->pagesize, MMU_SEGSIZE_256M,
false);
}
@@ -90,6 +90,7 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte)
int attempt = 0;
struct kvmppc_sid_map *map;
int r = 0;
+ int hpsize = MMU_PAGE_4K;
/* Get host physical address for gpa */
hpaddr = kvmppc_gfn_to_pfn(vcpu, orig_pte->raddr >> PAGE_SHIFT);
@@ -99,7 +100,6 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte)
goto out;
}
hpaddr <<= PAGE_SHIFT;
- hpaddr |= orig_pte->raddr & (~0xfffULL & ~PAGE_MASK);
/* and write the mapping ea -> hpa into the pt */
vcpu->arch.mmu.esid_to_vsid(vcpu, orig_pte->eaddr >> SID_SHIFT, &vsid);
@@ -117,8 +117,7 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte)
goto out;
}
- vsid = map->host_vsid;
- vpn = hpt_vpn(orig_pte->eaddr, vsid, MMU_SEGSIZE_256M);
+ vpn = hpt_vpn(orig_pte->eaddr, map->host_vsid, MMU_SEGSIZE_256M);
if (!orig_pte->may_write)
rflags |= HPTE_R_PP;
@@ -130,7 +129,16 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte)
else
kvmppc_mmu_flush_icache(hpaddr >> PAGE_SHIFT);
- hash = hpt_hash(vpn, PTE_SIZE, MMU_SEGSIZE_256M);
+ /*
+ * Use 64K pages if possible; otherwise, on 64K page kernels,
+ * we need to transfer 4 more bits from guest real to host real addr.
+ */
+ if (vsid & VSID_64K)
+ hpsize = MMU_PAGE_64K;
+ else
+ hpaddr |= orig_pte->raddr & (~0xfffULL & ~PAGE_MASK);
+
+ hash = hpt_hash(vpn, mmu_psize_defs[hpsize].shift, MMU_SEGSIZE_256M);
map_again:
hpteg = ((hash & htab_hash_mask) * HPTES_PER_GROUP);
@@ -143,7 +151,7 @@ map_again:
}
ret = ppc_md.hpte_insert(hpteg, vpn, hpaddr, rflags, vflags,
- MMU_PAGE_4K, MMU_PAGE_4K, MMU_SEGSIZE_256M);
+ hpsize, hpsize, MMU_SEGSIZE_256M);
if (ret < 0) {
/* If we couldn't map a primary PTE, try a secondary */
@@ -168,6 +176,7 @@ map_again:
pte->host_vpn = vpn;
pte->pte = *orig_pte;
pte->pfn = hpaddr >> PAGE_SHIFT;
+ pte->pagesize = hpsize;
kvmppc_mmu_hpte_cache_map(vcpu, pte);
}
@@ -291,6 +300,12 @@ int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr)
slb_vsid &= ~SLB_VSID_KP;
slb_esid |= slb_index;
+#ifdef CONFIG_PPC_64K_PAGES
+ /* Set host segment base page size to 64K if possible */
+ if (gvsid & VSID_64K)
+ slb_vsid |= mmu_psize_defs[MMU_PAGE_64K].sllp;
+#endif
+
svcpu->slb[slb_index].esid = slb_esid;
svcpu->slb[slb_index].vsid = slb_vsid;
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index c465db5..0cd760f 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -368,6 +368,7 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
pte.raddr = eaddr & KVM_PAM;
pte.eaddr = eaddr;
pte.vpage = eaddr >> 12;
+ pte.page_size = MMU_PAGE_64K;
}
switch (vcpu->arch.shared->msr & (MSR_DR|MSR_IR)) {
--
1.8.3.1
* [PATCH 6/8] KVM: PPC: Book3S PR: Handle PP0 page-protection bit in guest HPTEs
From: Paul Mackerras @ 2013-07-11 11:53 UTC
To: Alexander Graf; +Cc: kvm-ppc, kvm
64-bit POWER processors have a three-bit field for page protection in
the hashed page table entry (HPTE). Currently we only interpret the two
bits that were present in older versions of the architecture. The only
defined combination that has the new bit set is 110, meaning read-only
for supervisor and no access for user mode.
This adds code to kvmppc_mmu_book3s_64_xlate() to interpret the extra
bit appropriately.
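Concretely, with pp computed as in the xlate code, the arithmetic works
out as follows (a worked example, not new code):

        pp = (r & HPTE_R_PP) | key;     /* key is 4 for user accesses */
        if (r & HPTE_R_PP0)
                pp |= 8;
        /* PP0=1, PP=0b10: supervisor (key=0) -> pp = 10, the new
         * read-only case; user (key=4) -> pp = 14, which matches no
         * case, i.e. no access. */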
Signed-off-by: Paul Mackerras <paulus@samba.org>
---
arch/powerpc/kvm/book3s_64_mmu.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/powerpc/kvm/book3s_64_mmu.c b/arch/powerpc/kvm/book3s_64_mmu.c
index 658ccd7..563fbf7 100644
--- a/arch/powerpc/kvm/book3s_64_mmu.c
+++ b/arch/powerpc/kvm/book3s_64_mmu.c
@@ -298,6 +298,8 @@ do_second:
v = pteg[i];
r = pteg[i+1];
pp = (r & HPTE_R_PP) | key;
+ if (r & HPTE_R_PP0)
+ pp |= 8;
gpte->eaddr = eaddr;
gpte->vpage = kvmppc_mmu_book3s_64_ea_to_vp(vcpu, eaddr, data);
@@ -319,6 +321,7 @@ do_second:
case 3:
case 5:
case 7:
+ case 10:
gpte->may_read = true;
break;
}
--
1.8.3.1
* [PATCH 7/8] KVM: PPC: Book3S PR: Correct errors in H_ENTER implementation
From: Paul Mackerras @ 2013-07-11 11:54 UTC
To: Alexander Graf; +Cc: kvm-ppc, kvm
The implementation of H_ENTER in PR KVM has some errors:
* With H_EXACT not set, if the HPTEG is full, we return H_PTEG_FULL
as the return value of kvmppc_h_pr_enter, but the caller is expecting
one of the EMULATE_* values. The H_PTEG_FULL needs to go in the
guest's R3 instead.
* With H_EXACT set, if the selected HPTE is already valid, the H_ENTER
call should return an H_PTEG_FULL error.
This fixes these errors and also makes it write only the selected HPTE,
not the whole group, since only the selected HPTE has been modified.
This also micro-optimizes the calculations involving pte_index and i.
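The convention being enforced, as a C sketch (illustrative only -- the
_sketch suffix marks it as hypothetical; the real handler is in the
diff below):

static int kvmppc_h_pr_enter_sketch(struct kvm_vcpu *vcpu)
{
        long ret = H_PTEG_FULL;

        /* ... search or check the PTEG; on success, write back only
         * the modified HPTE, set ret = H_SUCCESS and report the
         * chosen slot in guest R4 ... */

        kvmppc_set_gpr(vcpu, 3, ret);   /* hcall status for the guest */
        return EMULATE_DONE;            /* status for the emulation layer */
}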
Signed-off-by: Paul Mackerras <paulus@samba.org>
---
arch/powerpc/kvm/book3s_pr_papr.c | 19 ++++++++++++++-----
1 file changed, 14 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_pr_papr.c b/arch/powerpc/kvm/book3s_pr_papr.c
index da0e0bc..38f1899 100644
--- a/arch/powerpc/kvm/book3s_pr_papr.c
+++ b/arch/powerpc/kvm/book3s_pr_papr.c
@@ -21,6 +21,8 @@
#include <asm/kvm_ppc.h>
#include <asm/kvm_book3s.h>
+#define HPTE_SIZE 16 /* bytes per HPT entry */
+
static unsigned long get_pteg_addr(struct kvm_vcpu *vcpu, long pte_index)
{
struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
@@ -40,32 +42,39 @@ static int kvmppc_h_pr_enter(struct kvm_vcpu *vcpu)
long pte_index = kvmppc_get_gpr(vcpu, 5);
unsigned long pteg[2 * 8];
unsigned long pteg_addr, i, *hpte;
+ long int ret;
+ i = pte_index & 7;
pte_index &= ~7UL;
pteg_addr = get_pteg_addr(vcpu, pte_index);
copy_from_user(pteg, (void __user *)pteg_addr, sizeof(pteg));
hpte = pteg;
+ ret = H_PTEG_FULL;
if (likely((flags & H_EXACT) == 0)) {
- pte_index &= ~7UL;
for (i = 0; ; ++i) {
if (i == 8)
- return H_PTEG_FULL;
+ goto done;
if ((*hpte & HPTE_V_VALID) == 0)
break;
hpte += 2;
}
} else {
- i = kvmppc_get_gpr(vcpu, 5) & 7UL;
hpte += i * 2;
+ if (*hpte & HPTE_V_VALID)
+ goto done;
}
hpte[0] = kvmppc_get_gpr(vcpu, 6);
hpte[1] = kvmppc_get_gpr(vcpu, 7);
- copy_to_user((void __user *)pteg_addr, pteg, sizeof(pteg));
- kvmppc_set_gpr(vcpu, 3, H_SUCCESS);
+ pteg_addr += i * HPTE_SIZE;
+ copy_to_user((void __user *)pteg_addr, hpte, HPTE_SIZE);
kvmppc_set_gpr(vcpu, 4, pte_index | i);
+ ret = H_SUCCESS;
+
+ done:
+ kvmppc_set_gpr(vcpu, 3, ret);
return EMULATE_DONE;
}
--
1.8.3.1
* [PATCH 8/8] KVM: PPC: Book3S PR: Make HPT accesses and updates SMP-safe
2013-07-11 11:48 [PATCH 0/8] PR KVM fixes and improvements Paul Mackerras
` (6 preceding siblings ...)
2013-07-11 11:54 ` [PATCH 7/8] KVM: PPC: Book3S PR: Correct errors in H_ENTER implementation Paul Mackerras
@ 2013-07-11 11:55 ` Paul Mackerras
2013-07-12 1:59 ` [PATCH v2 " Paul Mackerras
7 siblings, 1 reply; 15+ messages in thread
From: Paul Mackerras @ 2013-07-11 11:55 UTC (permalink / raw)
To: Alexander Graf; +Cc: kvm-ppc, kvm
This adds a per-VM mutex to provide mutual exclusion between vcpus
for accesses to and updates of the guest hashed page table (HPT).
This also makes the code use single-byte writes to the HPT entry
when updating the reference (R) and change (C) bits. The reason
for doing this, rather than writing back the whole HPTE, is that on
non-PAPR virtual machines, the guest OS might be writing to the HPTE
concurrently, and writing back the whole HPTE might conflict with
that. Also, real hardware does single-byte writes to update R and C.
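To make the byte arithmetic concrete (a standalone sketch, not kernel
code): in the big-endian second doubleword of the HPTE, R (0x100) lives
in byte 6 and C (0x80) in byte 7, so writing r >> 8 at offset 6 and r
at offset 7 touches exactly the bytes containing those bits:

#include <stdio.h>

int main(void)
{
        unsigned long long r = 0;
        unsigned char hpte[8] = { 0 };  /* big-endian second dword */

        r |= 0x100ULL;                  /* HPTE_R_R */
        hpte[6] = (unsigned char)(r >> 8);      /* byte write of R */

        r |= 0x80ULL;                   /* HPTE_R_C */
        hpte[7] = (unsigned char)r;             /* byte write of C */

        printf("byte 6 = %02x, byte 7 = %02x\n", hpte[6], hpte[7]);
        return 0;                       /* prints 01 80 */
}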
The new mutex is taken in kvmppc_mmu_book3s_64_xlate() when reading
the HPT and updating R and/or C, and in the PAPR HPT update hcalls
(H_ENTER, H_REMOVE, etc.).
The other change here is to make emulated TLB invalidations (tlbie)
effective across all vcpus. To do this we call kvmppc_mmu_pte_vflush
for all vcpus in kvmppc_mmu_book3s_64_tlbie().
With this, PR KVM can successfully run SMP guests.
Signed-off-by: Paul Mackerras <paulus@samba.org>
---
arch/powerpc/include/asm/kvm_host.h | 3 +++
arch/powerpc/kvm/book3s_64_mmu.c | 33 +++++++++++++++++++++++----------
arch/powerpc/kvm/book3s_pr.c | 1 +
arch/powerpc/kvm/book3s_pr_papr.c | 33 +++++++++++++++++++++++----------
4 files changed, 50 insertions(+), 20 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 2d3c770..14935ae 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -259,6 +259,9 @@ struct kvm_arch {
struct kvmppc_vcore *vcores[KVM_MAX_VCORES];
int hpt_cma_alloc;
#endif /* CONFIG_KVM_BOOK3S_64_HV */
+#ifdef CONFIG_KVM_BOOK3S_64_PR
+ struct mutex hpt_mutex;
+#endif
#ifdef CONFIG_PPC_BOOK3S_64
struct list_head spapr_tce_tables;
struct list_head rtas_tokens;
diff --git a/arch/powerpc/kvm/book3s_64_mmu.c b/arch/powerpc/kvm/book3s_64_mmu.c
index 563fbf7..26a57ca 100644
--- a/arch/powerpc/kvm/book3s_64_mmu.c
+++ b/arch/powerpc/kvm/book3s_64_mmu.c
@@ -257,6 +257,8 @@ static int kvmppc_mmu_book3s_64_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
pgsize = slbe->large ? MMU_PAGE_16M : MMU_PAGE_4K;
+ mutex_lock(&vcpu->kvm->arch.hpt_mutex);
+
do_second:
ptegp = kvmppc_mmu_book3s_64_get_pteg(vcpu_book3s, slbe, eaddr, second);
if (kvm_is_error_hva(ptegp))
@@ -332,30 +334,37 @@ do_second:
/* Update PTE R and C bits, so the guest's swapper knows we used the
* page */
- if (gpte->may_read) {
- /* Set the accessed flag */
+ if (gpte->may_read && !(r & HPTE_R_R)) {
+ /*
+ * Set the accessed flag.
+ * We have to write this back with a single byte write
+ * because another vcpu may be accessing this on
+ * non-PAPR platforms such as mac99, and this is
+ * what real hardware does.
+ */
+ char __user *addr = (char __user *) &pteg[i+1];
r |= HPTE_R_R;
+ put_user(r >> 8, addr + 6);
}
- if (data && gpte->may_write) {
+ if (data && gpte->may_write && !(r & HPTE_R_C)) {
/* Set the dirty flag -- XXX even if not writing */
+ /* Use a single byte write */
+ char __user *addr = (char __user *) &pteg[i+1];
r |= HPTE_R_C;
+ put_user(r, addr + 7);
}
- /* Write back into the PTEG */
- if (pteg[i+1] != r) {
- pteg[i+1] = r;
- copy_to_user((void __user *)ptegp, pteg, sizeof(pteg));
- }
+ mutex_unlock(&vcpu->kvm->arch.hpt_mutex);
if (!gpte->may_read)
return -EPERM;
return 0;
no_page_found:
+ mutex_unlock(&vcpu->kvm->arch.hpt_mutex);
return -ENOENT;
no_seg_found:
-
dprintk("KVM MMU: Trigger segment fault\n");
return -EINVAL;
}
@@ -520,6 +529,8 @@ static void kvmppc_mmu_book3s_64_tlbie(struct kvm_vcpu *vcpu, ulong va,
bool large)
{
u64 mask = 0xFFFFFFFFFULL;
+ long i;
+ struct kvm_vcpu *v;
dprintk("KVM MMU: tlbie(0x%lx)\n", va);
@@ -542,7 +553,9 @@ static void kvmppc_mmu_book3s_64_tlbie(struct kvm_vcpu *vcpu, ulong va,
if (large)
mask = 0xFFFFFF000ULL;
}
- kvmppc_mmu_pte_vflush(vcpu, va >> 12, mask);
+ /* flush this VA on all vcpus */
+ kvm_for_each_vcpu(i, v, vcpu->kvm)
+ kvmppc_mmu_pte_vflush(v, va >> 12, mask);
}
#ifdef CONFIG_PPC_64K_PAGES
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 0cd760f..d19547a 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1326,6 +1326,7 @@ int kvmppc_core_init_vm(struct kvm *kvm)
INIT_LIST_HEAD(&kvm->arch.spapr_tce_tables);
INIT_LIST_HEAD(&kvm->arch.rtas_tokens);
#endif
+ mutex_init(&kvm->arch.hpt_mutex);
if (firmware_has_feature(FW_FEATURE_SET_MODE)) {
spin_lock(&kvm_global_user_count_lock);
diff --git a/arch/powerpc/kvm/book3s_pr_papr.c b/arch/powerpc/kvm/book3s_pr_papr.c
index 38f1899..5efa97b 100644
--- a/arch/powerpc/kvm/book3s_pr_papr.c
+++ b/arch/powerpc/kvm/book3s_pr_papr.c
@@ -48,6 +48,7 @@ static int kvmppc_h_pr_enter(struct kvm_vcpu *vcpu)
pte_index &= ~7UL;
pteg_addr = get_pteg_addr(vcpu, pte_index);
+ mutex_lock(&vcpu->kvm->arch.hpt_mutex);
copy_from_user(pteg, (void __user *)pteg_addr, sizeof(pteg));
hpte = pteg;
@@ -74,6 +75,7 @@ static int kvmppc_h_pr_enter(struct kvm_vcpu *vcpu)
ret = H_SUCCESS;
done:
+ mutex_unlock(&vcpu->kvm->arch.hpt_mutex);
kvmppc_set_gpr(vcpu, 3, ret);
return EMULATE_DONE;
@@ -86,26 +88,31 @@ static int kvmppc_h_pr_remove(struct kvm_vcpu *vcpu)
unsigned long avpn = kvmppc_get_gpr(vcpu, 6);
unsigned long v = 0, pteg, rb;
unsigned long pte[2];
+ long int ret;
pteg = get_pteg_addr(vcpu, pte_index);
+ mutex_lock(&vcpu->kvm->arch.hpt_mutex);
copy_from_user(pte, (void __user *)pteg, sizeof(pte));
+ ret = H_NOT_FOUND;
if ((pte[0] & HPTE_V_VALID) == 0 ||
((flags & H_AVPN) && (pte[0] & ~0x7fUL) != avpn) ||
- ((flags & H_ANDCOND) && (pte[0] & avpn) != 0)) {
- kvmppc_set_gpr(vcpu, 3, H_NOT_FOUND);
- return EMULATE_DONE;
- }
+ ((flags & H_ANDCOND) && (pte[0] & avpn) != 0))
+ goto done;
copy_to_user((void __user *)pteg, &v, sizeof(v));
rb = compute_tlbie_rb(pte[0], pte[1], pte_index);
vcpu->arch.mmu.tlbie(vcpu, rb, rb & 1 ? true : false);
- kvmppc_set_gpr(vcpu, 3, H_SUCCESS);
+ ret = H_SUCCESS;
kvmppc_set_gpr(vcpu, 4, pte[0]);
kvmppc_set_gpr(vcpu, 5, pte[1]);
+ done:
+ mutex_unlock(&vcpu->kvm->arch.hpt_mutex);
+ kvmppc_set_gpr(vcpu, 3, ret);
+
return EMULATE_DONE;
}
@@ -133,6 +140,7 @@ static int kvmppc_h_pr_bulk_remove(struct kvm_vcpu *vcpu)
int paramnr = 4;
int ret = H_SUCCESS;
+ mutex_lock(&vcpu->kvm->arch.hpt_mutex);
for (i = 0; i < H_BULK_REMOVE_MAX_BATCH; i++) {
unsigned long tsh = kvmppc_get_gpr(vcpu, paramnr+(2*i));
unsigned long tsl = kvmppc_get_gpr(vcpu, paramnr+(2*i)+1);
@@ -181,6 +189,7 @@ static int kvmppc_h_pr_bulk_remove(struct kvm_vcpu *vcpu)
}
kvmppc_set_gpr(vcpu, paramnr+(2*i), tsh);
}
+ mutex_unlock(&vcpu->kvm->arch.hpt_mutex);
kvmppc_set_gpr(vcpu, 3, ret);
return EMULATE_DONE;
@@ -193,15 +202,16 @@ static int kvmppc_h_pr_protect(struct kvm_vcpu *vcpu)
unsigned long avpn = kvmppc_get_gpr(vcpu, 6);
unsigned long rb, pteg, r, v;
unsigned long pte[2];
+ long int ret;
pteg = get_pteg_addr(vcpu, pte_index);
+ mutex_lock(&vcpu->kvm->arch.hpt_mutex);
copy_from_user(pte, (void __user *)pteg, sizeof(pte));
+ ret = H_NOT_FOUND;
if ((pte[0] & HPTE_V_VALID) == 0 ||
- ((flags & H_AVPN) && (pte[0] & ~0x7fUL) != avpn)) {
- kvmppc_set_gpr(vcpu, 3, H_NOT_FOUND);
- return EMULATE_DONE;
- }
+ ((flags & H_AVPN) && (pte[0] & ~0x7fUL) != avpn))
+ goto done;
v = pte[0];
r = pte[1];
@@ -216,8 +226,11 @@ static int kvmppc_h_pr_protect(struct kvm_vcpu *vcpu)
rb = compute_tlbie_rb(v, r, pte_index);
vcpu->arch.mmu.tlbie(vcpu, rb, rb & 1 ? true : false);
copy_to_user((void __user *)pteg, pte, sizeof(pte));
+ ret = H_SUCCESS;
- kvmppc_set_gpr(vcpu, 3, H_SUCCESS);
+ done:
+ mutex_unlock(&vcpu->kvm->arch.hpt_mutex);
+ kvmppc_set_gpr(vcpu, 3, ret);
return EMULATE_DONE;
}
--
1.8.3.1
* [PATCH v2 8/8] KVM: PPC: Book3S PR: Make HPT accesses and updates SMP-safe
2013-07-11 11:55 ` [PATCH 8/8] KVM: PPC: Book3S PR: Make HPT accesses and updates SMP-safe Paul Mackerras
@ 2013-07-12 1:59 ` Paul Mackerras
0 siblings, 0 replies; 15+ messages in thread
From: Paul Mackerras @ 2013-07-12 1:59 UTC (permalink / raw)
To: Alexander Graf; +Cc: kvm-ppc, kvm
This adds a per-VM mutex to provide mutual exclusion between vcpus
for accesses to and updates of the guest hashed page table (HPT).
This also makes the code use single-byte writes to the HPT entry
when updating the reference (R) and change (C) bits. The reason
for doing this, rather than writing back the whole HPTE, is that on
non-PAPR virtual machines, the guest OS might be writing to the HPTE
concurrently, and writing back the whole HPTE might conflict with
that. Also, real hardware does single-byte writes to update R and C.
The new mutex is taken in kvmppc_mmu_book3s_64_xlate() when reading
the HPT and updating R and/or C, and in the PAPR HPT update hcalls
(H_ENTER, H_REMOVE, etc.). Having the mutex means that we don't need
to use a hypervisor lock bit in the HPT update hcalls, and we don't
need to be careful about the order in which the bytes of the HPTE are
updated by those hcalls.
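The pattern each hcall follows with the mutex, sketched (illustrative
only; the actual handlers are in book3s_pr_papr.c in the diff below):

        mutex_lock(&vcpu->kvm->arch.hpt_mutex);
        copy_from_user(pte, (void __user *)pteg, sizeof(pte));
        /* ... examine and modify pte; no in-HPTE lock bit and no
         * careful byte ordering needed, since every HPT accessor
         * holds the same mutex ... */
        copy_to_user((void __user *)pteg, pte, sizeof(pte));
        mutex_unlock(&vcpu->kvm->arch.hpt_mutex);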
The other change here is to make emulated TLB invalidations (tlbie)
effective across all vcpus. To do this we call kvmppc_mmu_pte_vflush
for all vcpus in kvmppc_mmu_book3s_64_tlbie().
For 32-bit, this makes the setting of the accessed and dirty bits use
single-byte writes, and makes tlbie invalidate shadow HPTEs for all
vcpus.
With this, PR KVM can successfully run SMP guests.
Signed-off-by: Paul Mackerras <paulus@samba.org>
---
v2: Make it compile on 32-bit, and update 32-bit MMU code as well.
arch/powerpc/include/asm/kvm_host.h | 3 +++
arch/powerpc/kvm/book3s_32_mmu.c | 36 +++++++++++++++++++++--------------
arch/powerpc/kvm/book3s_64_mmu.c | 33 ++++++++++++++++++++++----------
arch/powerpc/kvm/book3s_pr.c | 1 +
arch/powerpc/kvm/book3s_pr_papr.c | 33 ++++++++++++++++++++++----------
5 files changed, 72 insertions(+), 34 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 2d3c770..c37207f 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -259,6 +259,9 @@ struct kvm_arch {
struct kvmppc_vcore *vcores[KVM_MAX_VCORES];
int hpt_cma_alloc;
#endif /* CONFIG_KVM_BOOK3S_64_HV */
+#ifdef CONFIG_KVM_BOOK3S_PR
+ struct mutex hpt_mutex;
+#endif
#ifdef CONFIG_PPC_BOOK3S_64
struct list_head spapr_tce_tables;
struct list_head rtas_tokens;
diff --git a/arch/powerpc/kvm/book3s_32_mmu.c b/arch/powerpc/kvm/book3s_32_mmu.c
index af04553..856af98 100644
--- a/arch/powerpc/kvm/book3s_32_mmu.c
+++ b/arch/powerpc/kvm/book3s_32_mmu.c
@@ -271,19 +271,22 @@ static int kvmppc_mmu_book3s_32_xlate_pte(struct kvm_vcpu *vcpu, gva_t eaddr,
/* Update PTE C and A bits, so the guest's swapper knows we used the
page */
if (found) {
- u32 oldpte = pteg[i+1];
-
- if (pte->may_read)
- pteg[i+1] |= PTEG_FLAG_ACCESSED;
- if (pte->may_write)
- pteg[i+1] |= PTEG_FLAG_DIRTY;
- else
- dprintk_pte("KVM: Mapping read-only page!\n");
-
- /* Write back into the PTEG */
- if (pteg[i+1] != oldpte)
- copy_to_user((void __user *)ptegp, pteg, sizeof(pteg));
-
+ u32 pte_r = pteg[i+1];
+ char __user *addr = (char __user *) &pteg[i+1];
+
+ /*
+ * Use single-byte writes to update the HPTE, to
+ * conform to what real hardware does.
+ */
+ if (pte->may_read && !(pte_r & PTEG_FLAG_ACCESSED)) {
+ pte_r |= PTEG_FLAG_ACCESSED;
+ put_user(pte_r >> 8, addr + 2);
+ }
+ if (pte->may_write && !(pte_r & PTEG_FLAG_DIRTY)) {
+ /* XXX should only set this for stores */
+ pte_r |= PTEG_FLAG_DIRTY;
+ put_user(pte_r, addr + 3);
+ }
return 0;
}
@@ -348,7 +351,12 @@ static void kvmppc_mmu_book3s_32_mtsrin(struct kvm_vcpu *vcpu, u32 srnum,
static void kvmppc_mmu_book3s_32_tlbie(struct kvm_vcpu *vcpu, ulong ea, bool large)
{
- kvmppc_mmu_pte_flush(vcpu, ea, 0x0FFFF000);
+ int i;
+ struct kvm_vcpu *v;
+
+ /* flush this VA on all cpus */
+ kvm_for_each_vcpu(i, v, vcpu->kvm)
+ kvmppc_mmu_pte_flush(v, ea, 0x0FFFF000);
}
static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
diff --git a/arch/powerpc/kvm/book3s_64_mmu.c b/arch/powerpc/kvm/book3s_64_mmu.c
index 563fbf7..26a57ca 100644
--- a/arch/powerpc/kvm/book3s_64_mmu.c
+++ b/arch/powerpc/kvm/book3s_64_mmu.c
@@ -257,6 +257,8 @@ static int kvmppc_mmu_book3s_64_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
pgsize = slbe->large ? MMU_PAGE_16M : MMU_PAGE_4K;
+ mutex_lock(&vcpu->kvm->arch.hpt_mutex);
+
do_second:
ptegp = kvmppc_mmu_book3s_64_get_pteg(vcpu_book3s, slbe, eaddr, second);
if (kvm_is_error_hva(ptegp))
@@ -332,30 +334,37 @@ do_second:
/* Update PTE R and C bits, so the guest's swapper knows we used the
* page */
- if (gpte->may_read) {
- /* Set the accessed flag */
+ if (gpte->may_read && !(r & HPTE_R_R)) {
+ /*
+ * Set the accessed flag.
+ * We have to write this back with a single byte write
+ * because another vcpu may be accessing this on
+ * non-PAPR platforms such as mac99, and this is
+ * what real hardware does.
+ */
+ char __user *addr = (char __user *) &pteg[i+1];
r |= HPTE_R_R;
+ put_user(r >> 8, addr + 6);
}
- if (data && gpte->may_write) {
+ if (data && gpte->may_write && !(r & HPTE_R_C)) {
/* Set the dirty flag -- XXX even if not writing */
+ /* Use a single byte write */
+ char __user *addr = (char __user *) &pteg[i+1];
r |= HPTE_R_C;
+ put_user(r, addr + 7);
}
- /* Write back into the PTEG */
- if (pteg[i+1] != r) {
- pteg[i+1] = r;
- copy_to_user((void __user *)ptegp, pteg, sizeof(pteg));
- }
+ mutex_unlock(&vcpu->kvm->arch.hpt_mutex);
if (!gpte->may_read)
return -EPERM;
return 0;
no_page_found:
+ mutex_unlock(&vcpu->kvm->arch.hpt_mutex);
return -ENOENT;
no_seg_found:
-
dprintk("KVM MMU: Trigger segment fault\n");
return -EINVAL;
}
@@ -520,6 +529,8 @@ static void kvmppc_mmu_book3s_64_tlbie(struct kvm_vcpu *vcpu, ulong va,
bool large)
{
u64 mask = 0xFFFFFFFFFULL;
+ long i;
+ struct kvm_vcpu *v;
dprintk("KVM MMU: tlbie(0x%lx)\n", va);
@@ -542,7 +553,9 @@ static void kvmppc_mmu_book3s_64_tlbie(struct kvm_vcpu *vcpu, ulong va,
if (large)
mask = 0xFFFFFF000ULL;
}
- kvmppc_mmu_pte_vflush(vcpu, va >> 12, mask);
+ /* flush this VA on all vcpus */
+ kvm_for_each_vcpu(i, v, vcpu->kvm)
+ kvmppc_mmu_pte_vflush(v, va >> 12, mask);
}
#ifdef CONFIG_PPC_64K_PAGES
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 0cd760f..d19547a 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1326,6 +1326,7 @@ int kvmppc_core_init_vm(struct kvm *kvm)
INIT_LIST_HEAD(&kvm->arch.spapr_tce_tables);
INIT_LIST_HEAD(&kvm->arch.rtas_tokens);
#endif
+ mutex_init(&kvm->arch.hpt_mutex);
if (firmware_has_feature(FW_FEATURE_SET_MODE)) {
spin_lock(&kvm_global_user_count_lock);
diff --git a/arch/powerpc/kvm/book3s_pr_papr.c b/arch/powerpc/kvm/book3s_pr_papr.c
index 38f1899..5efa97b 100644
--- a/arch/powerpc/kvm/book3s_pr_papr.c
+++ b/arch/powerpc/kvm/book3s_pr_papr.c
@@ -48,6 +48,7 @@ static int kvmppc_h_pr_enter(struct kvm_vcpu *vcpu)
pte_index &= ~7UL;
pteg_addr = get_pteg_addr(vcpu, pte_index);
+ mutex_lock(&vcpu->kvm->arch.hpt_mutex);
copy_from_user(pteg, (void __user *)pteg_addr, sizeof(pteg));
hpte = pteg;
@@ -74,6 +75,7 @@ static int kvmppc_h_pr_enter(struct kvm_vcpu *vcpu)
ret = H_SUCCESS;
done:
+ mutex_unlock(&vcpu->kvm->arch.hpt_mutex);
kvmppc_set_gpr(vcpu, 3, ret);
return EMULATE_DONE;
@@ -86,26 +88,31 @@ static int kvmppc_h_pr_remove(struct kvm_vcpu *vcpu)
unsigned long avpn = kvmppc_get_gpr(vcpu, 6);
unsigned long v = 0, pteg, rb;
unsigned long pte[2];
+ long int ret;
pteg = get_pteg_addr(vcpu, pte_index);
+ mutex_lock(&vcpu->kvm->arch.hpt_mutex);
copy_from_user(pte, (void __user *)pteg, sizeof(pte));
+ ret = H_NOT_FOUND;
if ((pte[0] & HPTE_V_VALID) == 0 ||
((flags & H_AVPN) && (pte[0] & ~0x7fUL) != avpn) ||
- ((flags & H_ANDCOND) && (pte[0] & avpn) != 0)) {
- kvmppc_set_gpr(vcpu, 3, H_NOT_FOUND);
- return EMULATE_DONE;
- }
+ ((flags & H_ANDCOND) && (pte[0] & avpn) != 0))
+ goto done;
copy_to_user((void __user *)pteg, &v, sizeof(v));
rb = compute_tlbie_rb(pte[0], pte[1], pte_index);
vcpu->arch.mmu.tlbie(vcpu, rb, rb & 1 ? true : false);
- kvmppc_set_gpr(vcpu, 3, H_SUCCESS);
+ ret = H_SUCCESS;
kvmppc_set_gpr(vcpu, 4, pte[0]);
kvmppc_set_gpr(vcpu, 5, pte[1]);
+ done:
+ mutex_unlock(&vcpu->kvm->arch.hpt_mutex);
+ kvmppc_set_gpr(vcpu, 3, ret);
+
return EMULATE_DONE;
}
@@ -133,6 +140,7 @@ static int kvmppc_h_pr_bulk_remove(struct kvm_vcpu *vcpu)
int paramnr = 4;
int ret = H_SUCCESS;
+ mutex_lock(&vcpu->kvm->arch.hpt_mutex);
for (i = 0; i < H_BULK_REMOVE_MAX_BATCH; i++) {
unsigned long tsh = kvmppc_get_gpr(vcpu, paramnr+(2*i));
unsigned long tsl = kvmppc_get_gpr(vcpu, paramnr+(2*i)+1);
@@ -181,6 +189,7 @@ static int kvmppc_h_pr_bulk_remove(struct kvm_vcpu *vcpu)
}
kvmppc_set_gpr(vcpu, paramnr+(2*i), tsh);
}
+ mutex_unlock(&vcpu->kvm->arch.hpt_mutex);
kvmppc_set_gpr(vcpu, 3, ret);
return EMULATE_DONE;
@@ -193,15 +202,16 @@ static int kvmppc_h_pr_protect(struct kvm_vcpu *vcpu)
unsigned long avpn = kvmppc_get_gpr(vcpu, 6);
unsigned long rb, pteg, r, v;
unsigned long pte[2];
+ long int ret;
pteg = get_pteg_addr(vcpu, pte_index);
+ mutex_lock(&vcpu->kvm->arch.hpt_mutex);
copy_from_user(pte, (void __user *)pteg, sizeof(pte));
+ ret = H_NOT_FOUND;
if ((pte[0] & HPTE_V_VALID) == 0 ||
- ((flags & H_AVPN) && (pte[0] & ~0x7fUL) != avpn)) {
- kvmppc_set_gpr(vcpu, 3, H_NOT_FOUND);
- return EMULATE_DONE;
- }
+ ((flags & H_AVPN) && (pte[0] & ~0x7fUL) != avpn))
+ goto done;
v = pte[0];
r = pte[1];
@@ -216,8 +226,11 @@ static int kvmppc_h_pr_protect(struct kvm_vcpu *vcpu)
rb = compute_tlbie_rb(v, r, pte_index);
vcpu->arch.mmu.tlbie(vcpu, rb, rb & 1 ? true : false);
copy_to_user((void __user *)pteg, pte, sizeof(pte));
+ ret = H_SUCCESS;
- kvmppc_set_gpr(vcpu, 3, H_SUCCESS);
+ done:
+ mutex_unlock(&vcpu->kvm->arch.hpt_mutex);
+ kvmppc_set_gpr(vcpu, 3, ret);
return EMULATE_DONE;
}
--
1.7.10.4
* [PATCH v2 2/8] KVM: PPC: Book3S PR: Keep volatile reg values in vcpu rather than shadow_vcpu
2013-07-11 11:50 ` [PATCH 2/8] KVM: PPC: Book3S PR: Keep volatile reg values in vcpu rather than shadow_vcpu Paul Mackerras
@ 2013-07-13 12:21 ` Paul Mackerras
2013-07-25 13:54 ` [PATCH " Alexander Graf
1 sibling, 0 replies; 15+ messages in thread
From: Paul Mackerras @ 2013-07-13 12:21 UTC (permalink / raw)
To: Alexander Graf; +Cc: kvm-ppc, kvm
Currently PR-style KVM keeps the volatile guest register values
(R0 - R13, CR, LR, CTR, XER, PC) in a shadow_vcpu struct rather than
the main kvm_vcpu struct. For 64-bit, the shadow_vcpu exists in two
places, a kmalloc'd struct and in the PACA, and it gets copied back
and forth in kvmppc_core_vcpu_load/put(), because the real-mode code
can't rely on being able to access the kmalloc'd struct.
This changes the code to copy the volatile values into the shadow_vcpu
as one of the last things done before entering the guest. Similarly
the values are copied back out of the shadow_vcpu to the kvm_vcpu
immediately after exiting the guest. We arrange for interrupts to
still be disabled at this point so that we can't get preempted on
64-bit and end up copying values from the wrong PACA.
This means that the accessor functions in kvm_book3s.h for these
registers are greatly simplified, and are the same between PR and HV KVM.
In places where accesses to shadow_vcpu fields are now replaced by
accesses to the kvm_vcpu, we can also remove the svcpu_get/put pairs.
Finally, on 64-bit, we don't need the kmalloc'd struct at all any more.
With this, the time to read the PVR one million times in a loop went
from 478.2ms to 480.1ms (averages of 4 values), a difference which is
not statistically significant given the variability of the results.
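The shape of the new entry-path copy, as a C sketch of what the
assembly in book3s_interrupts.S does (the function name here is
hypothetical; the exit path does the reverse):

static void copy_to_shadow_vcpu(struct kvmppc_book3s_shadow_vcpu *svcpu,
                                struct kvm_vcpu *vcpu)
{
        int i;

        /* one of the last steps before guest entry; interrupts are
         * off on 64-bit so we fill the right PACA's shadow_vcpu */
        for (i = 0; i < 14; i++)
                svcpu->gpr[i] = vcpu->arch.gpr[i];
        svcpu->cr  = vcpu->arch.cr;
        svcpu->xer = vcpu->arch.xer;
        svcpu->ctr = vcpu->arch.ctr;
        svcpu->lr  = vcpu->arch.lr;
        svcpu->pc  = vcpu->arch.pc;
}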
Signed-off-by: Paul Mackerras <paulus@samba.org>
---
v2: Doesn't break compilation of non-book3s targets.
arch/powerpc/include/asm/kvm_book3s.h | 193 +++++-------------------------
arch/powerpc/include/asm/kvm_book3s_asm.h | 6 +-
arch/powerpc/include/asm/kvm_host.h | 1 +
arch/powerpc/kernel/asm-offsets.c | 3 +-
arch/powerpc/kvm/book3s_emulate.c | 8 +-
arch/powerpc/kvm/book3s_interrupts.S | 101 ++++++++++++++++
arch/powerpc/kvm/book3s_pr.c | 68 +++++------
arch/powerpc/kvm/book3s_rmhandlers.S | 5 -
arch/powerpc/kvm/trace.h | 7 +-
9 files changed, 175 insertions(+), 217 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 08891d0..5d68f6c 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -198,149 +198,97 @@ extern void kvm_return_point(void);
#include <asm/kvm_book3s_64.h>
#endif
-#ifdef CONFIG_KVM_BOOK3S_PR
-
-static inline unsigned long kvmppc_interrupt_offset(struct kvm_vcpu *vcpu)
-{
- return to_book3s(vcpu)->hior;
-}
-
-static inline void kvmppc_update_int_pending(struct kvm_vcpu *vcpu,
- unsigned long pending_now, unsigned long old_pending)
-{
- if (pending_now)
- vcpu->arch.shared->int_pending = 1;
- else if (old_pending)
- vcpu->arch.shared->int_pending = 0;
-}
-
static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
{
- if ( num < 14 ) {
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- svcpu->gpr[num] = val;
- svcpu_put(svcpu);
- to_book3s(vcpu)->shadow_vcpu->gpr[num] = val;
- } else
- vcpu->arch.gpr[num] = val;
+ vcpu->arch.gpr[num] = val;
}
static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
{
- if ( num < 14 ) {
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- ulong r = svcpu->gpr[num];
- svcpu_put(svcpu);
- return r;
- } else
- return vcpu->arch.gpr[num];
+ return vcpu->arch.gpr[num];
}
static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- svcpu->cr = val;
- svcpu_put(svcpu);
- to_book3s(vcpu)->shadow_vcpu->cr = val;
+ vcpu->arch.cr = val;
}
static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- u32 r;
- r = svcpu->cr;
- svcpu_put(svcpu);
- return r;
+ return vcpu->arch.cr;
}
static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, u32 val)
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- svcpu->xer = val;
- to_book3s(vcpu)->shadow_vcpu->xer = val;
- svcpu_put(svcpu);
+ vcpu->arch.xer = val;
}
static inline u32 kvmppc_get_xer(struct kvm_vcpu *vcpu)
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- u32 r;
- r = svcpu->xer;
- svcpu_put(svcpu);
- return r;
+ return vcpu->arch.xer;
}
static inline void kvmppc_set_ctr(struct kvm_vcpu *vcpu, ulong val)
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- svcpu->ctr = val;
- svcpu_put(svcpu);
+ vcpu->arch.ctr = val;
}
static inline ulong kvmppc_get_ctr(struct kvm_vcpu *vcpu)
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- ulong r;
- r = svcpu->ctr;
- svcpu_put(svcpu);
- return r;
+ return vcpu->arch.ctr;
}
static inline void kvmppc_set_lr(struct kvm_vcpu *vcpu, ulong val)
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- svcpu->lr = val;
- svcpu_put(svcpu);
+ vcpu->arch.lr = val;
}
static inline ulong kvmppc_get_lr(struct kvm_vcpu *vcpu)
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- ulong r;
- r = svcpu->lr;
- svcpu_put(svcpu);
- return r;
+ return vcpu->arch.lr;
}
static inline void kvmppc_set_pc(struct kvm_vcpu *vcpu, ulong val)
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- svcpu->pc = val;
- svcpu_put(svcpu);
+ vcpu->arch.pc = val;
}
static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- ulong r;
- r = svcpu->pc;
- svcpu_put(svcpu);
- return r;
+ return vcpu->arch.pc;
}
static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
{
ulong pc = kvmppc_get_pc(vcpu);
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- u32 r;
/* Load the instruction manually if it failed to do so in the
* exit path */
- if (svcpu->last_inst == KVM_INST_FETCH_FAILED)
- kvmppc_ld(vcpu, &pc, sizeof(u32), &svcpu->last_inst, false);
+ if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
+ kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
- r = svcpu->last_inst;
- svcpu_put(svcpu);
- return r;
+ return vcpu->arch.last_inst;
}
static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- ulong r;
- r = svcpu->fault_dar;
- svcpu_put(svcpu);
- return r;
+ return vcpu->arch.fault_dar;
+}
+
+#ifdef CONFIG_KVM_BOOK3S_PR
+
+static inline unsigned long kvmppc_interrupt_offset(struct kvm_vcpu *vcpu)
+{
+ return to_book3s(vcpu)->hior;
+}
+
+static inline void kvmppc_update_int_pending(struct kvm_vcpu *vcpu,
+ unsigned long pending_now, unsigned long old_pending)
+{
+ if (pending_now)
+ vcpu->arch.shared->int_pending = 1;
+ else if (old_pending)
+ vcpu->arch.shared->int_pending = 0;
}
static inline bool kvmppc_critical_section(struct kvm_vcpu *vcpu)
@@ -374,83 +322,6 @@ static inline void kvmppc_update_int_pending(struct kvm_vcpu *vcpu,
{
}
-static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
-{
- vcpu->arch.gpr[num] = val;
-}
-
-static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
-{
- return vcpu->arch.gpr[num];
-}
-
-static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
-{
- vcpu->arch.cr = val;
-}
-
-static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
-{
- return vcpu->arch.cr;
-}
-
-static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, u32 val)
-{
- vcpu->arch.xer = val;
-}
-
-static inline u32 kvmppc_get_xer(struct kvm_vcpu *vcpu)
-{
- return vcpu->arch.xer;
-}
-
-static inline void kvmppc_set_ctr(struct kvm_vcpu *vcpu, ulong val)
-{
- vcpu->arch.ctr = val;
-}
-
-static inline ulong kvmppc_get_ctr(struct kvm_vcpu *vcpu)
-{
- return vcpu->arch.ctr;
-}
-
-static inline void kvmppc_set_lr(struct kvm_vcpu *vcpu, ulong val)
-{
- vcpu->arch.lr = val;
-}
-
-static inline ulong kvmppc_get_lr(struct kvm_vcpu *vcpu)
-{
- return vcpu->arch.lr;
-}
-
-static inline void kvmppc_set_pc(struct kvm_vcpu *vcpu, ulong val)
-{
- vcpu->arch.pc = val;
-}
-
-static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
-{
- return vcpu->arch.pc;
-}
-
-static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
-{
- ulong pc = kvmppc_get_pc(vcpu);
-
- /* Load the instruction manually if it failed to do so in the
- * exit path */
- if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
- kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
-
- return vcpu->arch.last_inst;
-}
-
-static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
-{
- return vcpu->arch.fault_dar;
-}
-
static inline bool kvmppc_critical_section(struct kvm_vcpu *vcpu)
{
return false;
diff --git a/arch/powerpc/include/asm/kvm_book3s_asm.h b/arch/powerpc/include/asm/kvm_book3s_asm.h
index 9039d3c..4141409 100644
--- a/arch/powerpc/include/asm/kvm_book3s_asm.h
+++ b/arch/powerpc/include/asm/kvm_book3s_asm.h
@@ -108,14 +108,14 @@ struct kvmppc_book3s_shadow_vcpu {
ulong gpr[14];
u32 cr;
u32 xer;
-
- u32 fault_dsisr;
- u32 last_inst;
ulong ctr;
ulong lr;
ulong pc;
+
ulong shadow_srr1;
ulong fault_dar;
+ u32 fault_dsisr;
+ u32 last_inst;
#ifdef CONFIG_PPC_BOOK3S_32
u32 sr[16]; /* Guest SRs */
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 3328353..7b26395 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -463,6 +463,7 @@ struct kvm_vcpu_arch {
u32 ctrl;
ulong dabr;
ulong cfar;
+ ulong shadow_srr1;
#endif
u32 vrsave; /* also USPRG0 */
u32 mmucr;
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index a67c76e..aa61629 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -515,12 +515,11 @@ int main(void)
DEFINE(VCPU_TRAP, offsetof(struct kvm_vcpu, arch.trap));
DEFINE(VCPU_PTID, offsetof(struct kvm_vcpu, arch.ptid));
DEFINE(VCPU_CFAR, offsetof(struct kvm_vcpu, arch.cfar));
+ DEFINE(VCPU_SHADOW_SRR1, offsetof(struct kvm_vcpu, arch.shadow_srr1));
DEFINE(VCORE_ENTRY_EXIT, offsetof(struct kvmppc_vcore, entry_exit_count));
DEFINE(VCORE_NAP_COUNT, offsetof(struct kvmppc_vcore, nap_count));
DEFINE(VCORE_IN_GUEST, offsetof(struct kvmppc_vcore, in_guest));
DEFINE(VCORE_NAPPING_THREADS, offsetof(struct kvmppc_vcore, napping_threads));
- DEFINE(VCPU_SVCPU, offsetof(struct kvmppc_vcpu_book3s, shadow_vcpu) -
- offsetof(struct kvmppc_vcpu_book3s, vcpu));
DEFINE(VCPU_SLB_E, offsetof(struct kvmppc_slb, orige));
DEFINE(VCPU_SLB_V, offsetof(struct kvmppc_slb, origv));
DEFINE(VCPU_SLB_SIZE, sizeof(struct kvmppc_slb));
diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
index 360ce68..34044b1 100644
--- a/arch/powerpc/kvm/book3s_emulate.c
+++ b/arch/powerpc/kvm/book3s_emulate.c
@@ -267,12 +267,9 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
r = kvmppc_st(vcpu, &addr, 32, zeros, true);
if ((r == -ENOENT) || (r == -EPERM)) {
- struct kvmppc_book3s_shadow_vcpu *svcpu;
-
- svcpu = svcpu_get(vcpu);
*advance = 0;
vcpu->arch.shared->dar = vaddr;
- svcpu->fault_dar = vaddr;
+ vcpu->arch.fault_dar = vaddr;
dsisr = DSISR_ISSTORE;
if (r == -ENOENT)
@@ -281,8 +278,7 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
dsisr |= DSISR_PROTFAULT;
vcpu->arch.shared->dsisr = dsisr;
- svcpu->fault_dsisr = dsisr;
- svcpu_put(svcpu);
+ vcpu->arch.fault_dsisr = dsisr;
kvmppc_book3s_queue_irqprio(vcpu,
BOOK3S_INTERRUPT_DATA_STORAGE);
diff --git a/arch/powerpc/kvm/book3s_interrupts.S b/arch/powerpc/kvm/book3s_interrupts.S
index 17cfae5..c935195 100644
--- a/arch/powerpc/kvm/book3s_interrupts.S
+++ b/arch/powerpc/kvm/book3s_interrupts.S
@@ -26,8 +26,12 @@
#if defined(CONFIG_PPC_BOOK3S_64)
#define FUNC(name) GLUE(.,name)
+#define GET_SHADOW_VCPU(reg) mr reg, r13
+
#elif defined(CONFIG_PPC_BOOK3S_32)
#define FUNC(name) name
+#define GET_SHADOW_VCPU(reg) lwz reg, (THREAD + THREAD_KVM_SVCPU)(r2)
+
#endif /* CONFIG_PPC_BOOK3S_XX */
#define VCPU_LOAD_NVGPRS(vcpu) \
@@ -87,8 +91,49 @@ kvm_start_entry:
VCPU_LOAD_NVGPRS(r4)
kvm_start_lightweight:
+ /* Copy registers into shadow vcpu so we can access them in real mode */
+ GET_SHADOW_VCPU(r3)
+ PPC_LL r5, VCPU_GPR(R0)(r4)
+ PPC_LL r6, VCPU_GPR(R1)(r4)
+ PPC_LL r7, VCPU_GPR(R2)(r4)
+ PPC_LL r8, VCPU_GPR(R3)(r4)
+ PPC_STL r5, SVCPU_R0(r3)
+ PPC_STL r6, SVCPU_R1(r3)
+ PPC_STL r7, SVCPU_R2(r3)
+ PPC_STL r8, SVCPU_R3(r3)
+ PPC_LL r5, VCPU_GPR(R4)(r4)
+ PPC_LL r6, VCPU_GPR(R5)(r4)
+ PPC_LL r7, VCPU_GPR(R6)(r4)
+ PPC_LL r8, VCPU_GPR(R7)(r4)
+ PPC_STL r5, SVCPU_R4(r3)
+ PPC_STL r6, SVCPU_R5(r3)
+ PPC_STL r7, SVCPU_R6(r3)
+ PPC_STL r8, SVCPU_R7(r3)
+ PPC_LL r5, VCPU_GPR(R8)(r4)
+ PPC_LL r6, VCPU_GPR(R9)(r4)
+ PPC_LL r7, VCPU_GPR(R10)(r4)
+ PPC_LL r8, VCPU_GPR(R11)(r4)
+ PPC_STL r5, SVCPU_R8(r3)
+ PPC_STL r6, SVCPU_R9(r3)
+ PPC_STL r7, SVCPU_R10(r3)
+ PPC_STL r8, SVCPU_R11(r3)
+ PPC_LL r5, VCPU_GPR(R12)(r4)
+ PPC_LL r6, VCPU_GPR(R13)(r4)
+ lwz r7, VCPU_CR(r4)
+ PPC_LL r8, VCPU_XER(r4)
+ PPC_STL r5, SVCPU_R12(r3)
+ PPC_STL r6, SVCPU_R13(r3)
+ stw r7, SVCPU_CR(r3)
+ stw r8, SVCPU_XER(r3)
+ PPC_LL r5, VCPU_CTR(r4)
+ PPC_LL r6, VCPU_LR(r4)
+ PPC_LL r7, VCPU_PC(r4)
+ PPC_STL r5, SVCPU_CTR(r3)
+ PPC_STL r6, SVCPU_LR(r3)
+ PPC_STL r7, SVCPU_PC(r3)
#ifdef CONFIG_PPC_BOOK3S_64
+ /* Get the dcbz32 flag */
PPC_LL r3, VCPU_HFLAGS(r4)
rldicl r3, r3, 0, 63 /* r3 &= 1 */
stb r3, HSTATE_RESTORE_HID5(r13)
@@ -128,6 +173,61 @@ kvmppc_handler_highmem:
/* R7 = vcpu */
PPC_LL r7, GPR4(r1)
+ /* Transfer reg values from shadow vcpu back to vcpu struct */
+ /* On 64-bit, interrupts are still off at this point */
+ GET_SHADOW_VCPU(r4)
+ PPC_LL r5, SVCPU_R0(r4)
+ PPC_LL r6, SVCPU_R1(r4)
+ PPC_LL r3, SVCPU_R2(r4)
+ PPC_LL r8, SVCPU_R3(r4)
+ PPC_STL r5, VCPU_GPR(R0)(r7)
+ PPC_STL r6, VCPU_GPR(R1)(r7)
+ PPC_STL r3, VCPU_GPR(R2)(r7)
+ PPC_STL r8, VCPU_GPR(R3)(r7)
+ PPC_LL r5, SVCPU_R4(r4)
+ PPC_LL r6, SVCPU_R5(r4)
+ PPC_LL r3, SVCPU_R6(r4)
+ PPC_LL r8, SVCPU_R7(r4)
+ PPC_STL r5, VCPU_GPR(R4)(r7)
+ PPC_STL r6, VCPU_GPR(R5)(r7)
+ PPC_STL r3, VCPU_GPR(R6)(r7)
+ PPC_STL r8, VCPU_GPR(R7)(r7)
+ PPC_LL r5, SVCPU_R8(r4)
+ PPC_LL r6, SVCPU_R9(r4)
+ PPC_LL r3, SVCPU_R10(r4)
+ PPC_LL r8, SVCPU_R11(r4)
+ PPC_STL r5, VCPU_GPR(R8)(r7)
+ PPC_STL r6, VCPU_GPR(R9)(r7)
+ PPC_STL r3, VCPU_GPR(R10)(r7)
+ PPC_STL r8, VCPU_GPR(R11)(r7)
+ PPC_LL r5, SVCPU_R12(r4)
+ PPC_LL r6, SVCPU_R13(r4)
+ lwz r3, SVCPU_CR(r4)
+ lwz r8, SVCPU_XER(r4)
+ PPC_STL r5, VCPU_GPR(R12)(r7)
+ PPC_STL r6, VCPU_GPR(R13)(r7)
+ stw r3, VCPU_CR(r7)
+ PPC_STL r8, VCPU_XER(r7)
+ PPC_LL r5, SVCPU_CTR(r4)
+ PPC_LL r6, SVCPU_LR(r4)
+ PPC_LL r3, SVCPU_PC(r4)
+ PPC_LL r8, SVCPU_SHADOW_SRR1(r4)
+ PPC_STL r5, VCPU_CTR(r7)
+ PPC_STL r6, VCPU_LR(r7)
+ PPC_STL r3, VCPU_PC(r7)
+ PPC_STL r8, VCPU_SHADOW_SRR1(r7)
+ PPC_LL r5, SVCPU_FAULT_DAR(r4)
+ lwz r6, SVCPU_FAULT_DSISR(r4)
+ lwz r3, SVCPU_LAST_INST(r4)
+ PPC_STL r5, VCPU_FAULT_DAR(r7)
+ stw r6, VCPU_FAULT_DSISR(r7)
+ stw r3, VCPU_LAST_INST(r7)
+
+ /* Re-enable interrupts */
+ mfmsr r3
+ ori r3, r3, MSR_EE
+ MTMSR_EERI(r3)
+
#ifdef CONFIG_PPC_BOOK3S_64
/*
* Reload kernel SPRG3 value.
@@ -135,6 +235,7 @@ kvmppc_handler_highmem:
*/
ld r3, PACA_SPRG3(r13)
mtspr SPRN_SPRG3, r3
+
#endif /* CONFIG_PPC_BOOK3S_64 */
PPC_STL r14, VCPU_GPR(R14)(r7)
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index ddfaf56..5aa64e2 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -61,8 +61,6 @@ void kvmppc_core_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
#ifdef CONFIG_PPC_BOOK3S_64
struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
memcpy(svcpu->slb, to_book3s(vcpu)->slb_shadow, sizeof(svcpu->slb));
- memcpy(&get_paca()->shadow_vcpu, to_book3s(vcpu)->shadow_vcpu,
- sizeof(get_paca()->shadow_vcpu));
svcpu->slb_max = to_book3s(vcpu)->slb_shadow_max;
svcpu_put(svcpu);
#endif
@@ -77,8 +75,6 @@ void kvmppc_core_vcpu_put(struct kvm_vcpu *vcpu)
#ifdef CONFIG_PPC_BOOK3S_64
struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
memcpy(to_book3s(vcpu)->slb_shadow, svcpu->slb, sizeof(svcpu->slb));
- memcpy(to_book3s(vcpu)->shadow_vcpu, &get_paca()->shadow_vcpu,
- sizeof(get_paca()->shadow_vcpu));
to_book3s(vcpu)->slb_shadow_max = svcpu->slb_max;
svcpu_put(svcpu);
#endif
@@ -388,22 +384,18 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
if (page_found == -ENOENT) {
/* Page not found in guest PTE entries */
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
vcpu->arch.shared->dar = kvmppc_get_fault_dar(vcpu);
- vcpu->arch.shared->dsisr = svcpu->fault_dsisr;
+ vcpu->arch.shared->dsisr = vcpu->arch.fault_dsisr;
vcpu->arch.shared->msr |=
- (svcpu->shadow_srr1 & 0x00000000f8000000ULL);
- svcpu_put(svcpu);
+ vcpu->arch.shadow_srr1 & 0x00000000f8000000ULL;
kvmppc_book3s_queue_irqprio(vcpu, vec);
} else if (page_found == -EPERM) {
/* Storage protection */
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
vcpu->arch.shared->dar = kvmppc_get_fault_dar(vcpu);
- vcpu->arch.shared->dsisr = svcpu->fault_dsisr & ~DSISR_NOHPTE;
+ vcpu->arch.shared->dsisr = vcpu->arch.fault_dsisr & ~DSISR_NOHPTE;
vcpu->arch.shared->dsisr |= DSISR_PROTFAULT;
vcpu->arch.shared->msr |=
- svcpu->shadow_srr1 & 0x00000000f8000000ULL;
- svcpu_put(svcpu);
+ vcpu->arch.shadow_srr1 & 0x00000000f8000000ULL;
kvmppc_book3s_queue_irqprio(vcpu, vec);
} else if (page_found == -EINVAL) {
/* Page not found in guest SLB */
@@ -623,21 +615,26 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
switch (exit_nr) {
case BOOK3S_INTERRUPT_INST_STORAGE:
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- ulong shadow_srr1 = svcpu->shadow_srr1;
+ ulong shadow_srr1 = vcpu->arch.shadow_srr1;
vcpu->stat.pf_instruc++;
#ifdef CONFIG_PPC_BOOK3S_32
/* We set segments as unused segments when invalidating them. So
* treat the respective fault as segment fault. */
- if (svcpu->sr[kvmppc_get_pc(vcpu) >> SID_SHIFT] == SR_INVALID) {
- kvmppc_mmu_map_segment(vcpu, kvmppc_get_pc(vcpu));
- r = RESUME_GUEST;
+ {
+ struct kvmppc_book3s_shadow_vcpu *svcpu;
+ u32 sr;
+
+ svcpu = svcpu_get(vcpu);
+ sr = svcpu->sr[kvmppc_get_pc(vcpu) >> SID_SHIFT];
svcpu_put(svcpu);
- break;
+ if (sr == SR_INVALID) {
+ kvmppc_mmu_map_segment(vcpu, kvmppc_get_pc(vcpu));
+ r = RESUME_GUEST;
+ break;
+ }
}
#endif
- svcpu_put(svcpu);
/* only care about PTEG not found errors, but leave NX alone */
if (shadow_srr1 & 0x40000000) {
@@ -662,21 +659,26 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
case BOOK3S_INTERRUPT_DATA_STORAGE:
{
ulong dar = kvmppc_get_fault_dar(vcpu);
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- u32 fault_dsisr = svcpu->fault_dsisr;
+ u32 fault_dsisr = vcpu->arch.fault_dsisr;
vcpu->stat.pf_storage++;
#ifdef CONFIG_PPC_BOOK3S_32
/* We set segments as unused segments when invalidating them. So
* treat the respective fault as segment fault. */
- if ((svcpu->sr[dar >> SID_SHIFT]) == SR_INVALID) {
- kvmppc_mmu_map_segment(vcpu, dar);
- r = RESUME_GUEST;
+ {
+ struct kvmppc_book3s_shadow_vcpu *svcpu;
+ u32 sr;
+
+ svcpu = svcpu_get(vcpu);
+ sr = svcpu->sr[dar >> SID_SHIFT];
svcpu_put(svcpu);
- break;
+ if (sr == SR_INVALID) {
+ kvmppc_mmu_map_segment(vcpu, dar);
+ r = RESUME_GUEST;
+ break;
+ }
}
#endif
- svcpu_put(svcpu);
/* The only case we need to handle is missing shadow PTEs */
if (fault_dsisr & DSISR_NOHPTE) {
@@ -723,13 +725,10 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
case BOOK3S_INTERRUPT_H_EMUL_ASSIST:
{
enum emulation_result er;
- struct kvmppc_book3s_shadow_vcpu *svcpu;
ulong flags;
program_interrupt:
- svcpu = svcpu_get(vcpu);
- flags = svcpu->shadow_srr1 & 0x1f0000ull;
- svcpu_put(svcpu);
+ flags = vcpu->arch.shadow_srr1 & 0x1f0000ull;
if (vcpu->arch.shared->msr & MSR_PR) {
#ifdef EXIT_DEBUG
@@ -861,9 +860,7 @@ program_interrupt:
break;
default:
{
- struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
- ulong shadow_srr1 = svcpu->shadow_srr1;
- svcpu_put(svcpu);
+ ulong shadow_srr1 = vcpu->arch.shadow_srr1;
/* Ugh - bork here! What did we get? */
printk(KERN_EMERG "exit_nr=0x%x | pc=0x%lx | msr=0x%lx\n",
exit_nr, kvmppc_get_pc(vcpu), shadow_srr1);
@@ -1037,11 +1034,12 @@ struct kvm_vcpu *kvmppc_core_vcpu_create(struct kvm *kvm, unsigned int id)
if (!vcpu_book3s)
goto out;
+#ifdef CONFIG_KVM_BOOK3S_32
vcpu_book3s->shadow_vcpu =
kzalloc(sizeof(*vcpu_book3s->shadow_vcpu), GFP_KERNEL);
if (!vcpu_book3s->shadow_vcpu)
goto free_vcpu;
-
+#endif
vcpu = &vcpu_book3s->vcpu;
err = kvm_vcpu_init(vcpu, kvm, id);
if (err)
@@ -1074,8 +1072,10 @@ struct kvm_vcpu *kvmppc_core_vcpu_create(struct kvm *kvm, unsigned int id)
uninit_vcpu:
kvm_vcpu_uninit(vcpu);
free_shadow_vcpu:
+#ifdef CONFIG_KVM_BOOK3S_32
kfree(vcpu_book3s->shadow_vcpu);
free_vcpu:
+#endif
vfree(vcpu_book3s);
out:
return ERR_PTR(err);
diff --git a/arch/powerpc/kvm/book3s_rmhandlers.S b/arch/powerpc/kvm/book3s_rmhandlers.S
index 8f7633e..b64d7f9 100644
--- a/arch/powerpc/kvm/book3s_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_rmhandlers.S
@@ -179,11 +179,6 @@ _GLOBAL(kvmppc_entry_trampoline)
li r6, MSR_IR | MSR_DR
andc r6, r5, r6 /* Clear DR and IR in MSR value */
- /*
- * Set EE in HOST_MSR so that it's enabled when we get into our
- * C exit handler function
- */
- ori r5, r5, MSR_EE
mtsrr0 r7
mtsrr1 r6
RFI
diff --git a/arch/powerpc/kvm/trace.h b/arch/powerpc/kvm/trace.h
index e326489..a088e9a 100644
--- a/arch/powerpc/kvm/trace.h
+++ b/arch/powerpc/kvm/trace.h
@@ -101,17 +101,12 @@ TRACE_EVENT(kvm_exit,
),
TP_fast_assign(
-#ifdef CONFIG_KVM_BOOK3S_PR
- struct kvmppc_book3s_shadow_vcpu *svcpu;
-#endif
__entry->exit_nr = exit_nr;
__entry->pc = kvmppc_get_pc(vcpu);
__entry->dar = kvmppc_get_fault_dar(vcpu);
__entry->msr = vcpu->arch.shared->msr;
#ifdef CONFIG_KVM_BOOK3S_PR
- svcpu = svcpu_get(vcpu);
- __entry->srr1 = svcpu->shadow_srr1;
- svcpu_put(svcpu);
+ __entry->srr1 = vcpu->arch.shadow_srr1;
#endif
__entry->last_inst = vcpu->arch.last_inst;
),
--
1.8.3.1
* Re: [PATCH 1/8] KVM: PPC: Book3S PR: Load up SPRG3 register with guest value on guest entry
2013-07-11 11:49 ` [PATCH 1/8] KVM: PPC: Book3S PR: Load up SPRG3 register with guest value on guest entry Paul Mackerras
@ 2013-07-25 13:38 ` Alexander Graf
2013-07-25 13:40 ` Alexander Graf
0 siblings, 1 reply; 15+ messages in thread
From: Alexander Graf @ 2013-07-25 13:38 UTC (permalink / raw)
To: Paul Mackerras; +Cc: kvm-ppc, kvm
On 11.07.2013, at 13:49, Paul Mackerras wrote:
> Unlike the other general-purpose SPRs, SPRG3 can be read by usermode
> code, and is used in recent kernels to store the CPU and NUMA node
> numbers so that they can be read by VDSO functions. Thus we need to
> load the guest's SPRG3 value into the real SPRG3 register when entering
> the guest, and restore the host's value when exiting the guest. We don't
> need to save the guest SPRG3 value when exiting the guest as usermode
> code can't modify SPRG3.
This loads SPRG3 on every guest exit, which can happen a lot with instruction emulation. Since the kernel doesn't rely on the contents of SPRG3 we only have to care about it when not in KVM code, right?
So could we move this to kvmppc_core_vcpu_load/put instead?
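Something like this, perhaps (untested sketch; the guest value comes
from the shared page as in your patch, and the host value from the
PACA -- exact field names assumed):

void kvmppc_core_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
        /* ... existing load code ... */
#ifdef CONFIG_PPC_BOOK3S_64
        mtspr(SPRN_SPRG3, vcpu->arch.shared->sprg3);    /* guest value */
#endif
}

void kvmppc_core_vcpu_put(struct kvm_vcpu *vcpu)
{
#ifdef CONFIG_PPC_BOOK3S_64
        mtspr(SPRN_SPRG3, get_paca()->sprg3);   /* restore host value */
#endif
        /* ... existing put code ... */
}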
Alex
>
> Signed-off-by: Paul Mackerras <paulus@samba.org>
> ---
> arch/powerpc/kernel/asm-offsets.c | 1 +
> arch/powerpc/kvm/book3s_interrupts.S | 14 ++++++++++++++
> 2 files changed, 15 insertions(+)
>
> diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
> index 6f16ffa..a67c76e 100644
> --- a/arch/powerpc/kernel/asm-offsets.c
> +++ b/arch/powerpc/kernel/asm-offsets.c
> @@ -452,6 +452,7 @@ int main(void)
> DEFINE(VCPU_SPRG2, offsetof(struct kvm_vcpu, arch.shregs.sprg2));
> DEFINE(VCPU_SPRG3, offsetof(struct kvm_vcpu, arch.shregs.sprg3));
> #endif
> + DEFINE(VCPU_SHARED_SPRG3, offsetof(struct kvm_vcpu_arch_shared, sprg3));
> DEFINE(VCPU_SHARED_SPRG4, offsetof(struct kvm_vcpu_arch_shared, sprg4));
> DEFINE(VCPU_SHARED_SPRG5, offsetof(struct kvm_vcpu_arch_shared, sprg5));
> DEFINE(VCPU_SHARED_SPRG6, offsetof(struct kvm_vcpu_arch_shared, sprg6));
> diff --git a/arch/powerpc/kvm/book3s_interrupts.S b/arch/powerpc/kvm/book3s_interrupts.S
> index 48cbbf8..17cfae5 100644
> --- a/arch/powerpc/kvm/book3s_interrupts.S
> +++ b/arch/powerpc/kvm/book3s_interrupts.S
> @@ -92,6 +92,11 @@ kvm_start_lightweight:
> PPC_LL r3, VCPU_HFLAGS(r4)
> rldicl r3, r3, 0, 63 /* r3 &= 1 */
> stb r3, HSTATE_RESTORE_HID5(r13)
> +
> + /* Load up guest SPRG3 value, since it's user readable */
> + ld r3, VCPU_SHARED(r4)
> + ld r3, VCPU_SHARED_SPRG3(r3)
> + mtspr SPRN_SPRG3, r3
> #endif /* CONFIG_PPC_BOOK3S_64 */
>
> PPC_LL r4, VCPU_SHADOW_MSR(r4) /* get shadow_msr */
> @@ -123,6 +128,15 @@ kvmppc_handler_highmem:
> /* R7 = vcpu */
> PPC_LL r7, GPR4(r1)
>
> +#ifdef CONFIG_PPC_BOOK3S_64
> + /*
> + * Reload kernel SPRG3 value.
> + * No need to save guest value as usermode can't modify SPRG3.
> + */
> + ld r3, PACA_SPRG3(r13)
> + mtspr SPRN_SPRG3, r3
> +#endif /* CONFIG_PPC_BOOK3S_64 */
> +
> PPC_STL r14, VCPU_GPR(R14)(r7)
> PPC_STL r15, VCPU_GPR(R15)(r7)
> PPC_STL r16, VCPU_GPR(R16)(r7)
> --
> 1.8.3.1
>
* Re: [PATCH 1/8] KVM: PPC: Book3S PR: Load up SPRG3 register with guest value on guest entry
2013-07-25 13:38 ` Alexander Graf
@ 2013-07-25 13:40 ` Alexander Graf
0 siblings, 0 replies; 15+ messages in thread
From: Alexander Graf @ 2013-07-25 13:40 UTC (permalink / raw)
To: Paul Mackerras; +Cc: kvm-ppc, kvm
On 25.07.2013, at 15:38, Alexander Graf wrote:
>
> On 11.07.2013, at 13:49, Paul Mackerras wrote:
>
>> Unlike the other general-purpose SPRs, SPRG3 can be read by usermode
>> code, and is used in recent kernels to store the CPU and NUMA node
>> numbers so that they can be read by VDSO functions. Thus we need to
>> load the guest's SPRG3 value into the real SPRG3 register when entering
>> the guest, and restore the host's value when exiting the guest. We don't
>> need to save the guest SPRG3 value when exiting the guest as usermode
>> code can't modify SPRG3.
>
> This loads SPRG3 on every guest exit, which can happen a lot with instruction emulation. Since the kernel doesn't rely on the contents of SPRG3 we only have to care about it when not in KVM code, right?
>
> So could we move this to kvmppc_core_vcpu_load/put instead?
But then again, if all the shadow copy code is negligible performance-wise, then this probably is too. Applied to kvm-ppc-queue.
Alex
* Re: [PATCH 2/8] KVM: PPC: Book3S PR: Keep volatile reg values in vcpu rather than shadow_vcpu
2013-07-11 11:50 ` [PATCH 2/8] KVM: PPC: Book3S PR: Keep volatile reg values in vcpu rather than shadow_vcpu Paul Mackerras
2013-07-13 12:21 ` [PATCH v2 " Paul Mackerras
@ 2013-07-25 13:54 ` Alexander Graf
2013-08-03 2:00 ` Paul Mackerras
1 sibling, 1 reply; 15+ messages in thread
From: Alexander Graf @ 2013-07-25 13:54 UTC (permalink / raw)
To: Paul Mackerras; +Cc: kvm-ppc, kvm
On 11.07.2013, at 13:50, Paul Mackerras wrote:
> Currently PR-style KVM keeps the volatile guest register values
> (R0 - R13, CR, LR, CTR, XER, PC) in a shadow_vcpu struct rather than
> the main kvm_vcpu struct. For 64-bit, the shadow_vcpu exists in two
> places, a kmalloc'd struct and in the PACA, and it gets copied back
> and forth in kvmppc_core_vcpu_load/put(), because the real-mode code
> can't rely on being able to access the kmalloc'd struct.
>
> This changes the code to copy the volatile values into the shadow_vcpu
> as one of the last things done before entering the guest. Similarly
> the values are copied back out of the shadow_vcpu to the kvm_vcpu
> immediately after exiting the guest. We arrange for interrupts to be
> still disabled at this point so that we can't get preempted on 64-bit
> and end up copying values from the wrong PACA.
>
> This means that the accessor functions in kvm_book3s.h for these
> registers are greatly simplified, and are same between PR and HV KVM.
> In places where accesses to shadow_vcpu fields are now replaced by
> accesses to the kvm_vcpu, we can also remove the svcpu_get/put pairs.
> Finally, on 64-bit, we don't need the kmalloc'd struct at all any more.
>
> With this, the time to read the PVR one million times in a loop went
> from 478.2ms to 480.1ms (averages of 4 values), a difference which is
> not statistically significant given the variability of the results.
>
> Signed-off-by: Paul Mackerras <paulus@samba.org>
> ---
> arch/powerpc/include/asm/kvm_book3s.h | 193 +++++-------------------------
> arch/powerpc/include/asm/kvm_book3s_asm.h | 6 +-
> arch/powerpc/include/asm/kvm_host.h | 1 +
> arch/powerpc/kernel/asm-offsets.c | 3 +-
> arch/powerpc/kvm/book3s_emulate.c | 8 +-
> arch/powerpc/kvm/book3s_interrupts.S | 101 ++++++++++++++++
> arch/powerpc/kvm/book3s_pr.c | 68 +++++------
> arch/powerpc/kvm/book3s_rmhandlers.S | 5 -
> arch/powerpc/kvm/trace.h | 7 +-
> 9 files changed, 175 insertions(+), 217 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
> index 08891d0..5d68f6c 100644
> --- a/arch/powerpc/include/asm/kvm_book3s.h
> +++ b/arch/powerpc/include/asm/kvm_book3s.h
> @@ -198,149 +198,97 @@ extern void kvm_return_point(void);
> #include <asm/kvm_book3s_64.h>
> #endif
>
> -#ifdef CONFIG_KVM_BOOK3S_PR
> -
> -static inline unsigned long kvmppc_interrupt_offset(struct kvm_vcpu *vcpu)
> -{
> - return to_book3s(vcpu)->hior;
> -}
> -
> -static inline void kvmppc_update_int_pending(struct kvm_vcpu *vcpu,
> - unsigned long pending_now, unsigned long old_pending)
> -{
> - if (pending_now)
> - vcpu->arch.shared->int_pending = 1;
> - else if (old_pending)
> - vcpu->arch.shared->int_pending = 0;
> -}
> -
> static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
> {
> - if ( num < 14 ) {
> - struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
> - svcpu->gpr[num] = val;
> - svcpu_put(svcpu);
> - to_book3s(vcpu)->shadow_vcpu->gpr[num] = val;
> - } else
> - vcpu->arch.gpr[num] = val;
> + vcpu->arch.gpr[num] = val;
> }
>
> static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
> {
> - if ( num < 14 ) {
> - struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
> - ulong r = svcpu->gpr[num];
> - svcpu_put(svcpu);
> - return r;
> - } else
> - return vcpu->arch.gpr[num];
> + return vcpu->arch.gpr[num];
> }
>
> static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
> {
> - struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
> - svcpu->cr = val;
> - svcpu_put(svcpu);
> - to_book3s(vcpu)->shadow_vcpu->cr = val;
> + vcpu->arch.cr = val;
> }
>
> static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
> {
> - struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
> - u32 r;
> - r = svcpu->cr;
> - svcpu_put(svcpu);
> - return r;
> + return vcpu->arch.cr;
> }
>
> static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, u32 val)
> {
> - struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
> - svcpu->xer = val;
> - to_book3s(vcpu)->shadow_vcpu->xer = val;
> - svcpu_put(svcpu);
> + vcpu->arch.xer = val;
> }
>
> static inline u32 kvmppc_get_xer(struct kvm_vcpu *vcpu)
> {
> - struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
> - u32 r;
> - r = svcpu->xer;
> - svcpu_put(svcpu);
> - return r;
> + return vcpu->arch.xer;
> }
>
> static inline void kvmppc_set_ctr(struct kvm_vcpu *vcpu, ulong val)
> {
> - struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
> - svcpu->ctr = val;
> - svcpu_put(svcpu);
> + vcpu->arch.ctr = val;
> }
>
> static inline ulong kvmppc_get_ctr(struct kvm_vcpu *vcpu)
> {
> - struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
> - ulong r;
> - r = svcpu->ctr;
> - svcpu_put(svcpu);
> - return r;
> + return vcpu->arch.ctr;
> }
>
> static inline void kvmppc_set_lr(struct kvm_vcpu *vcpu, ulong val)
> {
> - struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
> - svcpu->lr = val;
> - svcpu_put(svcpu);
> + vcpu->arch.lr = val;
> }
>
> static inline ulong kvmppc_get_lr(struct kvm_vcpu *vcpu)
> {
> - struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
> - ulong r;
> - r = svcpu->lr;
> - svcpu_put(svcpu);
> - return r;
> + return vcpu->arch.lr;
> }
>
> static inline void kvmppc_set_pc(struct kvm_vcpu *vcpu, ulong val)
> {
> - struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
> - svcpu->pc = val;
> - svcpu_put(svcpu);
> + vcpu->arch.pc = val;
> }
>
> static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
> {
> - struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
> - ulong r;
> - r = svcpu->pc;
> - svcpu_put(svcpu);
> - return r;
> + return vcpu->arch.pc;
> }
>
> static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
> {
> ulong pc = kvmppc_get_pc(vcpu);
> - struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
> - u32 r;
>
> /* Load the instruction manually if it failed to do so in the
> * exit path */
> - if (svcpu->last_inst == KVM_INST_FETCH_FAILED)
> - kvmppc_ld(vcpu, &pc, sizeof(u32), &svcpu->last_inst, false);
> + if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
> + kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
>
> - r = svcpu->last_inst;
> - svcpu_put(svcpu);
> - return r;
> + return vcpu->arch.last_inst;
> }
>
> static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
> {
> - struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
> - ulong r;
> - r = svcpu->fault_dar;
> - svcpu_put(svcpu);
> - return r;
> + return vcpu->arch.fault_dar;
> +}
> +
> +#ifdef CONFIG_KVM_BOOK3S_PR
> +
> +static inline unsigned long kvmppc_interrupt_offset(struct kvm_vcpu *vcpu)
> +{
> + return to_book3s(vcpu)->hior;
> +}
> +
> +static inline void kvmppc_update_int_pending(struct kvm_vcpu *vcpu,
> + unsigned long pending_now, unsigned long old_pending)
> +{
> + if (pending_now)
> + vcpu->arch.shared->int_pending = 1;
> + else if (old_pending)
> + vcpu->arch.shared->int_pending = 0;
> }
>
> static inline bool kvmppc_critical_section(struct kvm_vcpu *vcpu)
> @@ -374,83 +322,6 @@ static inline void kvmppc_update_int_pending(struct kvm_vcpu *vcpu,
> {
> }
>
> -static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
> -{
> - vcpu->arch.gpr[num] = val;
> -}
> -
> -static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
> -{
> - return vcpu->arch.gpr[num];
> -}
> -
> -static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
> -{
> - vcpu->arch.cr = val;
> -}
> -
> -static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
> -{
> - return vcpu->arch.cr;
> -}
> -
> -static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, u32 val)
> -{
> - vcpu->arch.xer = val;
> -}
> -
> -static inline u32 kvmppc_get_xer(struct kvm_vcpu *vcpu)
> -{
> - return vcpu->arch.xer;
> -}
> -
> -static inline void kvmppc_set_ctr(struct kvm_vcpu *vcpu, ulong val)
> -{
> - vcpu->arch.ctr = val;
> -}
> -
> -static inline ulong kvmppc_get_ctr(struct kvm_vcpu *vcpu)
> -{
> - return vcpu->arch.ctr;
> -}
> -
> -static inline void kvmppc_set_lr(struct kvm_vcpu *vcpu, ulong val)
> -{
> - vcpu->arch.lr = val;
> -}
> -
> -static inline ulong kvmppc_get_lr(struct kvm_vcpu *vcpu)
> -{
> - return vcpu->arch.lr;
> -}
> -
> -static inline void kvmppc_set_pc(struct kvm_vcpu *vcpu, ulong val)
> -{
> - vcpu->arch.pc = val;
> -}
> -
> -static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
> -{
> - return vcpu->arch.pc;
> -}
> -
> -static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
> -{
> - ulong pc = kvmppc_get_pc(vcpu);
> -
> - /* Load the instruction manually if it failed to do so in the
> - * exit path */
> - if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
> - kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
> -
> - return vcpu->arch.last_inst;
> -}
> -
> -static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
> -{
> - return vcpu->arch.fault_dar;
> -}
> -
> static inline bool kvmppc_critical_section(struct kvm_vcpu *vcpu)
> {
> return false;
> diff --git a/arch/powerpc/include/asm/kvm_book3s_asm.h b/arch/powerpc/include/asm/kvm_book3s_asm.h
> index 9039d3c..4141409 100644
> --- a/arch/powerpc/include/asm/kvm_book3s_asm.h
> +++ b/arch/powerpc/include/asm/kvm_book3s_asm.h
> @@ -108,14 +108,14 @@ struct kvmppc_book3s_shadow_vcpu {
> ulong gpr[14];
> u32 cr;
> u32 xer;
> -
> - u32 fault_dsisr;
> - u32 last_inst;
> ulong ctr;
> ulong lr;
> ulong pc;
> +
> ulong shadow_srr1;
> ulong fault_dar;
> + u32 fault_dsisr;
> + u32 last_inst;
>
> #ifdef CONFIG_PPC_BOOK3S_32
> u32 sr[16]; /* Guest SRs */
> diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
> index 3328353..7b26395 100644
> --- a/arch/powerpc/include/asm/kvm_host.h
> +++ b/arch/powerpc/include/asm/kvm_host.h
> @@ -463,6 +463,7 @@ struct kvm_vcpu_arch {
> u32 ctrl;
> ulong dabr;
> ulong cfar;
> + ulong shadow_srr1;
> #endif
> u32 vrsave; /* also USPRG0 */
> u32 mmucr;
> diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
> index a67c76e..936d7cf 100644
> --- a/arch/powerpc/kernel/asm-offsets.c
> +++ b/arch/powerpc/kernel/asm-offsets.c
> @@ -462,6 +462,7 @@ int main(void)
> DEFINE(VCPU_SHARED, offsetof(struct kvm_vcpu, arch.shared));
> DEFINE(VCPU_SHARED_MSR, offsetof(struct kvm_vcpu_arch_shared, msr));
> DEFINE(VCPU_SHADOW_MSR, offsetof(struct kvm_vcpu, arch.shadow_msr));
> + DEFINE(VCPU_SHADOW_SRR1, offsetof(struct kvm_vcpu, arch.shadow_srr1));
>
> DEFINE(VCPU_SHARED_MAS0, offsetof(struct kvm_vcpu_arch_shared, mas0));
> DEFINE(VCPU_SHARED_MAS1, offsetof(struct kvm_vcpu_arch_shared, mas1));
> @@ -519,8 +520,6 @@ int main(void)
> DEFINE(VCORE_NAP_COUNT, offsetof(struct kvmppc_vcore, nap_count));
> DEFINE(VCORE_IN_GUEST, offsetof(struct kvmppc_vcore, in_guest));
> DEFINE(VCORE_NAPPING_THREADS, offsetof(struct kvmppc_vcore, napping_threads));
> - DEFINE(VCPU_SVCPU, offsetof(struct kvmppc_vcpu_book3s, shadow_vcpu) -
> - offsetof(struct kvmppc_vcpu_book3s, vcpu));
> DEFINE(VCPU_SLB_E, offsetof(struct kvmppc_slb, orige));
> DEFINE(VCPU_SLB_V, offsetof(struct kvmppc_slb, origv));
> DEFINE(VCPU_SLB_SIZE, sizeof(struct kvmppc_slb));
> diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
> index 360ce68..34044b1 100644
> --- a/arch/powerpc/kvm/book3s_emulate.c
> +++ b/arch/powerpc/kvm/book3s_emulate.c
> @@ -267,12 +267,9 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
>
> r = kvmppc_st(vcpu, &addr, 32, zeros, true);
> if ((r == -ENOENT) || (r == -EPERM)) {
> - struct kvmppc_book3s_shadow_vcpu *svcpu;
> -
> - svcpu = svcpu_get(vcpu);
> *advance = 0;
> vcpu->arch.shared->dar = vaddr;
> - svcpu->fault_dar = vaddr;
> + vcpu->arch.fault_dar = vaddr;
>
> dsisr = DSISR_ISSTORE;
> if (r == -ENOENT)
> @@ -281,8 +278,7 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
> dsisr |= DSISR_PROTFAULT;
>
> vcpu->arch.shared->dsisr = dsisr;
> - svcpu->fault_dsisr = dsisr;
> - svcpu_put(svcpu);
> + vcpu->arch.fault_dsisr = dsisr;
>
> kvmppc_book3s_queue_irqprio(vcpu,
> BOOK3S_INTERRUPT_DATA_STORAGE);
> diff --git a/arch/powerpc/kvm/book3s_interrupts.S b/arch/powerpc/kvm/book3s_interrupts.S
> index 17cfae5..c935195 100644
> --- a/arch/powerpc/kvm/book3s_interrupts.S
> +++ b/arch/powerpc/kvm/book3s_interrupts.S
> @@ -26,8 +26,12 @@
>
> #if defined(CONFIG_PPC_BOOK3S_64)
> #define FUNC(name) GLUE(.,name)
> +#define GET_SHADOW_VCPU(reg) mr reg, r13
Is this correct?
> +
> #elif defined(CONFIG_PPC_BOOK3S_32)
> #define FUNC(name) name
> +#define GET_SHADOW_VCPU(reg) lwz reg, (THREAD + THREAD_KVM_SVCPU)(r2)
> +
> #endif /* CONFIG_PPC_BOOK3S_XX */
>
> #define VCPU_LOAD_NVGPRS(vcpu) \
> @@ -87,8 +91,49 @@ kvm_start_entry:
> VCPU_LOAD_NVGPRS(r4)
>
> kvm_start_lightweight:
> + /* Copy registers into shadow vcpu so we can access them in real mode */
> + GET_SHADOW_VCPU(r3)
> + PPC_LL r5, VCPU_GPR(R0)(r4)
> + PPC_LL r6, VCPU_GPR(R1)(r4)
> + PPC_LL r7, VCPU_GPR(R2)(r4)
> + PPC_LL r8, VCPU_GPR(R3)(r4)
> + PPC_STL r5, SVCPU_R0(r3)
> + PPC_STL r6, SVCPU_R1(r3)
> + PPC_STL r7, SVCPU_R2(r3)
> + PPC_STL r8, SVCPU_R3(r3)
> + PPC_LL r5, VCPU_GPR(R4)(r4)
> + PPC_LL r6, VCPU_GPR(R5)(r4)
> + PPC_LL r7, VCPU_GPR(R6)(r4)
> + PPC_LL r8, VCPU_GPR(R7)(r4)
> + PPC_STL r5, SVCPU_R4(r3)
> + PPC_STL r6, SVCPU_R5(r3)
> + PPC_STL r7, SVCPU_R6(r3)
> + PPC_STL r8, SVCPU_R7(r3)
> + PPC_LL r5, VCPU_GPR(R8)(r4)
> + PPC_LL r6, VCPU_GPR(R9)(r4)
> + PPC_LL r7, VCPU_GPR(R10)(r4)
> + PPC_LL r8, VCPU_GPR(R11)(r4)
> + PPC_STL r5, SVCPU_R8(r3)
> + PPC_STL r6, SVCPU_R9(r3)
> + PPC_STL r7, SVCPU_R10(r3)
> + PPC_STL r8, SVCPU_R11(r3)
> + PPC_LL r5, VCPU_GPR(R12)(r4)
> + PPC_LL r6, VCPU_GPR(R13)(r4)
> + lwz r7, VCPU_CR(r4)
> + PPC_LL r8, VCPU_XER(r4)
> + PPC_STL r5, SVCPU_R12(r3)
> + PPC_STL r6, SVCPU_R13(r3)
> + stw r7, SVCPU_CR(r3)
> + stw r8, SVCPU_XER(r3)
> + PPC_LL r5, VCPU_CTR(r4)
> + PPC_LL r6, VCPU_LR(r4)
> + PPC_LL r7, VCPU_PC(r4)
> + PPC_STL r5, SVCPU_CTR(r3)
> + PPC_STL r6, SVCPU_LR(r3)
> + PPC_STL r7, SVCPU_PC(r3)
Can we put this and the reverse copy into C functions and just call them from here and below? It'd definitely improve readability. We could even skip the whole thing on systems where we're not limited by an RMA.
>
> #ifdef CONFIG_PPC_BOOK3S_64
> + /* Get the dcbz32 flag */
Hrm :)
> PPC_LL r3, VCPU_HFLAGS(r4)
> rldicl r3, r3, 0, 63 /* r3 &= 1 */
> stb r3, HSTATE_RESTORE_HID5(r13)
> @@ -128,6 +173,61 @@ kvmppc_handler_highmem:
> /* R7 = vcpu */
> PPC_LL r7, GPR4(r1)
>
> + /* Transfer reg values from shadow vcpu back to vcpu struct */
> + /* On 64-bit, interrupts are still off at this point */
> + GET_SHADOW_VCPU(r4)
> + PPC_LL r5, SVCPU_R0(r4)
> + PPC_LL r6, SVCPU_R1(r4)
> + PPC_LL r3, SVCPU_R2(r4)
> + PPC_LL r8, SVCPU_R3(r4)
> + PPC_STL r5, VCPU_GPR(R0)(r7)
> + PPC_STL r6, VCPU_GPR(R1)(r7)
> + PPC_STL r3, VCPU_GPR(R2)(r7)
> + PPC_STL r8, VCPU_GPR(R3)(r7)
> + PPC_LL r5, SVCPU_R4(r4)
> + PPC_LL r6, SVCPU_R5(r4)
> + PPC_LL r3, SVCPU_R6(r4)
> + PPC_LL r8, SVCPU_R7(r4)
> + PPC_STL r5, VCPU_GPR(R4)(r7)
> + PPC_STL r6, VCPU_GPR(R5)(r7)
> + PPC_STL r3, VCPU_GPR(R6)(r7)
> + PPC_STL r8, VCPU_GPR(R7)(r7)
> + PPC_LL r5, SVCPU_R8(r4)
> + PPC_LL r6, SVCPU_R9(r4)
> + PPC_LL r3, SVCPU_R10(r4)
> + PPC_LL r8, SVCPU_R11(r4)
> + PPC_STL r5, VCPU_GPR(R8)(r7)
> + PPC_STL r6, VCPU_GPR(R9)(r7)
> + PPC_STL r3, VCPU_GPR(R10)(r7)
> + PPC_STL r8, VCPU_GPR(R11)(r7)
> + PPC_LL r5, SVCPU_R12(r4)
> + PPC_LL r6, SVCPU_R13(r4)
> + lwz r3, SVCPU_CR(r4)
> + lwz r8, SVCPU_XER(r4)
> + PPC_STL r5, VCPU_GPR(R12)(r7)
> + PPC_STL r6, VCPU_GPR(R13)(r7)
> + stw r3, VCPU_CR(r7)
> + PPC_STL r8, VCPU_XER(r7)
> + PPC_LL r5, SVCPU_CTR(r4)
> + PPC_LL r6, SVCPU_LR(r4)
> + PPC_LL r3, SVCPU_PC(r4)
> + PPC_LL r8, SVCPU_SHADOW_SRR1(r4)
> + PPC_STL r5, VCPU_CTR(r7)
> + PPC_STL r6, VCPU_LR(r7)
> + PPC_STL r3, VCPU_PC(r7)
> + PPC_STL r8, VCPU_SHADOW_SRR1(r7)
> + PPC_LL r5, SVCPU_FAULT_DAR(r4)
> + lwz r6, SVCPU_FAULT_DSISR(r4)
> + lwz r3, SVCPU_LAST_INST(r4)
> + PPC_STL r5, VCPU_FAULT_DAR(r7)
> + stw r6, VCPU_FAULT_DSISR(r7)
> + stw r3, VCPU_LAST_INST(r7)
> +
> + /* Re-enable interrupts */
> + mfmsr r3
> + ori r3, r3, MSR_EE
> + MTMSR_EERI(r3)
> +
> #ifdef CONFIG_PPC_BOOK3S_64
> /*
> * Reload kernel SPRG3 value.
> @@ -135,6 +235,7 @@ kvmppc_handler_highmem:
> */
> ld r3, PACA_SPRG3(r13)
> mtspr SPRN_SPRG3, r3
> +
> #endif /* CONFIG_PPC_BOOK3S_64 */
>
> PPC_STL r14, VCPU_GPR(R14)(r7)
> diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
> index ddfaf56..5aa64e2 100644
> --- a/arch/powerpc/kvm/book3s_pr.c
> +++ b/arch/powerpc/kvm/book3s_pr.c
> @@ -61,8 +61,6 @@ void kvmppc_core_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> #ifdef CONFIG_PPC_BOOK3S_64
> struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
> memcpy(svcpu->slb, to_book3s(vcpu)->slb_shadow, sizeof(svcpu->slb));
> - memcpy(&get_paca()->shadow_vcpu, to_book3s(vcpu)->shadow_vcpu,
> - sizeof(get_paca()->shadow_vcpu));
> svcpu->slb_max = to_book3s(vcpu)->slb_shadow_max;
> svcpu_put(svcpu);
> #endif
> @@ -77,8 +75,6 @@ void kvmppc_core_vcpu_put(struct kvm_vcpu *vcpu)
> #ifdef CONFIG_PPC_BOOK3S_64
> struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
> memcpy(to_book3s(vcpu)->slb_shadow, svcpu->slb, sizeof(svcpu->slb));
> - memcpy(to_book3s(vcpu)->shadow_vcpu, &get_paca()->shadow_vcpu,
> - sizeof(get_paca()->shadow_vcpu));
> to_book3s(vcpu)->slb_shadow_max = svcpu->slb_max;
> svcpu_put(svcpu);
> #endif
> @@ -388,22 +384,18 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
>
> if (page_found == -ENOENT) {
> /* Page not found in guest PTE entries */
> - struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
> vcpu->arch.shared->dar = kvmppc_get_fault_dar(vcpu);
> - vcpu->arch.shared->dsisr = svcpu->fault_dsisr;
> + vcpu->arch.shared->dsisr = vcpu->arch.fault_dsisr;
> vcpu->arch.shared->msr |=
> - (svcpu->shadow_srr1 & 0x00000000f8000000ULL);
> - svcpu_put(svcpu);
> + vcpu->arch.shadow_srr1 & 0x00000000f8000000ULL;
> kvmppc_book3s_queue_irqprio(vcpu, vec);
> } else if (page_found == -EPERM) {
> /* Storage protection */
> - struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
> vcpu->arch.shared->dar = kvmppc_get_fault_dar(vcpu);
> - vcpu->arch.shared->dsisr = svcpu->fault_dsisr & ~DSISR_NOHPTE;
> + vcpu->arch.shared->dsisr = vcpu->arch.fault_dsisr & ~DSISR_NOHPTE;
> vcpu->arch.shared->dsisr |= DSISR_PROTFAULT;
> vcpu->arch.shared->msr |=
> - svcpu->shadow_srr1 & 0x00000000f8000000ULL;
> - svcpu_put(svcpu);
> + vcpu->arch.shadow_srr1 & 0x00000000f8000000ULL;
> kvmppc_book3s_queue_irqprio(vcpu, vec);
> } else if (page_found == -EINVAL) {
> /* Page not found in guest SLB */
> @@ -623,21 +615,26 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
> switch (exit_nr) {
> case BOOK3S_INTERRUPT_INST_STORAGE:
> {
> - struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
> - ulong shadow_srr1 = svcpu->shadow_srr1;
> + ulong shadow_srr1 = vcpu->arch.shadow_srr1;
> vcpu->stat.pf_instruc++;
>
> #ifdef CONFIG_PPC_BOOK3S_32
> /* We set segments as unused segments when invalidating them. So
> * treat the respective fault as segment fault. */
> - if (svcpu->sr[kvmppc_get_pc(vcpu) >> SID_SHIFT] == SR_INVALID) {
> - kvmppc_mmu_map_segment(vcpu, kvmppc_get_pc(vcpu));
> - r = RESUME_GUEST;
> + {
> + struct kvmppc_book3s_shadow_vcpu *svcpu;
> + u32 sr;
> +
> + svcpu = svcpu_get(vcpu);
> + sr = svcpu->sr[kvmppc_get_pc(vcpu) >> SID_SHIFT];
> svcpu_put(svcpu);
> - break;
> + if (sr == SR_INVALID) {
> + kvmppc_mmu_map_segment(vcpu, kvmppc_get_pc(vcpu));
> + r = RESUME_GUEST;
> + break;
> + }
> }
> #endif
> - svcpu_put(svcpu);
>
> /* only care about PTEG not found errors, but leave NX alone */
> if (shadow_srr1 & 0x40000000) {
> @@ -662,21 +659,26 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
> case BOOK3S_INTERRUPT_DATA_STORAGE:
> {
> ulong dar = kvmppc_get_fault_dar(vcpu);
> - struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
> - u32 fault_dsisr = svcpu->fault_dsisr;
> + u32 fault_dsisr = vcpu->arch.fault_dsisr;
> vcpu->stat.pf_storage++;
>
> #ifdef CONFIG_PPC_BOOK3S_32
> /* We set segments as unused segments when invalidating them. So
> * treat the respective fault as segment fault. */
> - if ((svcpu->sr[dar >> SID_SHIFT]) == SR_INVALID) {
Why are we even using a shadow vcpu on book3s_32? We can always just access the real vcpu, no? Hrm.
I think it makes sense to get rid of 99% of the svcpu logic: remove all the bits in C code that ever refer to an svcpu and use the vcpu as the reference for everything, until you hit the real-mode barrier on book3s_64. There, explicitly copy the few things we need in real mode into the PACA and refer only to the PACA in real-mode code.
It'd be nice if we could speed up that code path on systems that don't need to jump through hoops to get to their vcpu data (G5s, PowerStation, etc.), but I don't think it's worth the added complexity. We should rather try to make the code easy to understand :).
Alex
> - kvmppc_mmu_map_segment(vcpu, dar);
> - r = RESUME_GUEST;
> + {
> + struct kvmppc_book3s_shadow_vcpu *svcpu;
> + u32 sr;
> +
> + svcpu = svcpu_get(vcpu);
> + sr = svcpu->sr[dar >> SID_SHIFT];
> svcpu_put(svcpu);
> - break;
> + if (sr == SR_INVALID) {
> + kvmppc_mmu_map_segment(vcpu, dar);
> + r = RESUME_GUEST;
> + break;
> + }
> }
> #endif
> - svcpu_put(svcpu);
>
> /* The only case we need to handle is missing shadow PTEs */
> if (fault_dsisr & DSISR_NOHPTE) {
> @@ -723,13 +725,10 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
> case BOOK3S_INTERRUPT_H_EMUL_ASSIST:
> {
> enum emulation_result er;
> - struct kvmppc_book3s_shadow_vcpu *svcpu;
> ulong flags;
>
> program_interrupt:
> - svcpu = svcpu_get(vcpu);
> - flags = svcpu->shadow_srr1 & 0x1f0000ull;
> - svcpu_put(svcpu);
> + flags = vcpu->arch.shadow_srr1 & 0x1f0000ull;
>
> if (vcpu->arch.shared->msr & MSR_PR) {
> #ifdef EXIT_DEBUG
> @@ -861,9 +860,7 @@ program_interrupt:
> break;
> default:
> {
> - struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
> - ulong shadow_srr1 = svcpu->shadow_srr1;
> - svcpu_put(svcpu);
> + ulong shadow_srr1 = vcpu->arch.shadow_srr1;
> /* Ugh - bork here! What did we get? */
> printk(KERN_EMERG "exit_nr=0x%x | pc=0x%lx | msr=0x%lx\n",
> exit_nr, kvmppc_get_pc(vcpu), shadow_srr1);
> @@ -1037,11 +1034,12 @@ struct kvm_vcpu *kvmppc_core_vcpu_create(struct kvm *kvm, unsigned int id)
> if (!vcpu_book3s)
> goto out;
>
> +#ifdef CONFIG_KVM_BOOK3S_32
> vcpu_book3s->shadow_vcpu =
> kzalloc(sizeof(*vcpu_book3s->shadow_vcpu), GFP_KERNEL);
> if (!vcpu_book3s->shadow_vcpu)
> goto free_vcpu;
> -
> +#endif
> vcpu = &vcpu_book3s->vcpu;
> err = kvm_vcpu_init(vcpu, kvm, id);
> if (err)
> @@ -1074,8 +1072,10 @@ struct kvm_vcpu *kvmppc_core_vcpu_create(struct kvm *kvm, unsigned int id)
> uninit_vcpu:
> kvm_vcpu_uninit(vcpu);
> free_shadow_vcpu:
> +#ifdef CONFIG_KVM_BOOK3S_32
> kfree(vcpu_book3s->shadow_vcpu);
> free_vcpu:
> +#endif
> vfree(vcpu_book3s);
> out:
> return ERR_PTR(err);
> diff --git a/arch/powerpc/kvm/book3s_rmhandlers.S b/arch/powerpc/kvm/book3s_rmhandlers.S
> index 8f7633e..b64d7f9 100644
> --- a/arch/powerpc/kvm/book3s_rmhandlers.S
> +++ b/arch/powerpc/kvm/book3s_rmhandlers.S
> @@ -179,11 +179,6 @@ _GLOBAL(kvmppc_entry_trampoline)
>
> li r6, MSR_IR | MSR_DR
> andc r6, r5, r6 /* Clear DR and IR in MSR value */
> - /*
> - * Set EE in HOST_MSR so that it's enabled when we get into our
> - * C exit handler function
> - */
> - ori r5, r5, MSR_EE
> mtsrr0 r7
> mtsrr1 r6
> RFI
> diff --git a/arch/powerpc/kvm/trace.h b/arch/powerpc/kvm/trace.h
> index e326489..a088e9a 100644
> --- a/arch/powerpc/kvm/trace.h
> +++ b/arch/powerpc/kvm/trace.h
> @@ -101,17 +101,12 @@ TRACE_EVENT(kvm_exit,
> ),
>
> TP_fast_assign(
> -#ifdef CONFIG_KVM_BOOK3S_PR
> - struct kvmppc_book3s_shadow_vcpu *svcpu;
> -#endif
> __entry->exit_nr = exit_nr;
> __entry->pc = kvmppc_get_pc(vcpu);
> __entry->dar = kvmppc_get_fault_dar(vcpu);
> __entry->msr = vcpu->arch.shared->msr;
> #ifdef CONFIG_KVM_BOOK3S_PR
> - svcpu = svcpu_get(vcpu);
> - __entry->srr1 = svcpu->shadow_srr1;
> - svcpu_put(svcpu);
> + __entry->srr1 = vcpu->arch.shadow_srr1;
> #endif
> __entry->last_inst = vcpu->arch.last_inst;
> ),
> --
> 1.8.3.1
>
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 2/8] KVM: PPC: Book3S PR: Keep volatile reg values in vcpu rather than shadow_vcpu
2013-07-25 13:54 ` [PATCH " Alexander Graf
@ 2013-08-03 2:00 ` Paul Mackerras
0 siblings, 0 replies; 15+ messages in thread
From: Paul Mackerras @ 2013-08-03 2:00 UTC (permalink / raw)
To: Alexander Graf; +Cc: kvm-ppc, kvm
On Thu, Jul 25, 2013 at 03:54:17PM +0200, Alexander Graf wrote:
>
> On 11.07.2013, at 13:50, Paul Mackerras wrote:
>
> > diff --git a/arch/powerpc/kvm/book3s_interrupts.S b/arch/powerpc/kvm/book3s_interrupts.S
> > index 17cfae5..c935195 100644
> > --- a/arch/powerpc/kvm/book3s_interrupts.S
> > +++ b/arch/powerpc/kvm/book3s_interrupts.S
> > @@ -26,8 +26,12 @@
> >
> > #if defined(CONFIG_PPC_BOOK3S_64)
> > #define FUNC(name) GLUE(.,name)
> > +#define GET_SHADOW_VCPU(reg) mr reg, r13
>
> Is this correct?
Yes, it's just a copy of what's in book3s_segment.S already. On
64-bit, r13 holds the PACA pointer and the shadow vcpu is embedded in
the PACA, so "mr reg, r13" gives the shadow vcpu base; the SVCPU_*
offsets defined in asm-offsets.c are expressed as offsets from the
PACA.
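From memory, the relevant asm-offsets.c definitions look roughly like
this (a sketch; the exact macro name may differ):

#ifdef CONFIG_PPC_BOOK3S_64
#define SVCPU_FIELD(x, f) \
	DEFINE(x, offsetof(struct paca_struct, shadow_vcpu.f))
#else
#define SVCPU_FIELD(x, f) \
	DEFINE(x, offsetof(struct kvmppc_book3s_shadow_vcpu, f))
#endif

	SVCPU_FIELD(SVCPU_CR, cr);
	SVCPU_FIELD(SVCPU_XER, xer);
	/* ... and so on for the other SVCPU_* offsets ... */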
> > kvm_start_lightweight:
> > + /* Copy registers into shadow vcpu so we can access them in real mode */
> > + GET_SHADOW_VCPU(r3)
> > + PPC_LL r5, VCPU_GPR(R0)(r4)
> > + PPC_LL r6, VCPU_GPR(R1)(r4)
> > + PPC_LL r7, VCPU_GPR(R2)(r4)
> > + PPC_LL r8, VCPU_GPR(R3)(r4)
> > + PPC_STL r5, SVCPU_R0(r3)
> > + PPC_STL r6, SVCPU_R1(r3)
> > + PPC_STL r7, SVCPU_R2(r3)
> > + PPC_STL r8, SVCPU_R3(r3)
> > + PPC_LL r5, VCPU_GPR(R4)(r4)
> > + PPC_LL r6, VCPU_GPR(R5)(r4)
> > + PPC_LL r7, VCPU_GPR(R6)(r4)
> > + PPC_LL r8, VCPU_GPR(R7)(r4)
> > + PPC_STL r5, SVCPU_R4(r3)
> > + PPC_STL r6, SVCPU_R5(r3)
> > + PPC_STL r7, SVCPU_R6(r3)
> > + PPC_STL r8, SVCPU_R7(r3)
> > + PPC_LL r5, VCPU_GPR(R8)(r4)
> > + PPC_LL r6, VCPU_GPR(R9)(r4)
> > + PPC_LL r7, VCPU_GPR(R10)(r4)
> > + PPC_LL r8, VCPU_GPR(R11)(r4)
> > + PPC_STL r5, SVCPU_R8(r3)
> > + PPC_STL r6, SVCPU_R9(r3)
> > + PPC_STL r7, SVCPU_R10(r3)
> > + PPC_STL r8, SVCPU_R11(r3)
> > + PPC_LL r5, VCPU_GPR(R12)(r4)
> > + PPC_LL r6, VCPU_GPR(R13)(r4)
> > + lwz r7, VCPU_CR(r4)
> > + PPC_LL r8, VCPU_XER(r4)
> > + PPC_STL r5, SVCPU_R12(r3)
> > + PPC_STL r6, SVCPU_R13(r3)
> > + stw r7, SVCPU_CR(r3)
> > + stw r8, SVCPU_XER(r3)
> > + PPC_LL r5, VCPU_CTR(r4)
> > + PPC_LL r6, VCPU_LR(r4)
> > + PPC_LL r7, VCPU_PC(r4)
> > + PPC_STL r5, SVCPU_CTR(r3)
> > + PPC_STL r6, SVCPU_LR(r3)
> > + PPC_STL r7, SVCPU_PC(r3)
>
> Can we put this and the reverse copy into C functions and just call them from here and below? It'd definitely improve readability. We could even skip the whole thing on systems where we're not limited by an RMA.
Sure, will do.
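Something along these lines is what I have in mind -- just a sketch,
so the final names and placement may well differ:

/*
 * Copy the volatile guest registers from the vcpu into the shadow
 * vcpu so the real-mode code can get at them; a mirror-image helper
 * copies them back to the vcpu on the exit path.
 */
void kvmppc_copy_to_svcpu(struct kvmppc_book3s_shadow_vcpu *svcpu,
			  struct kvm_vcpu *vcpu)
{
	int i;

	for (i = 0; i <= 13; i++)
		svcpu->gpr[i] = vcpu->arch.gpr[i];
	svcpu->cr  = vcpu->arch.cr;
	svcpu->xer = vcpu->arch.xer;
	svcpu->ctr = vcpu->arch.ctr;
	svcpu->lr  = vcpu->arch.lr;
	svcpu->pc  = vcpu->arch.pc;
}

The entry code would call this (and the exit code its counterpart)
instead of the open-coded load/store pairs.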
> Why are we even using a shadow vcpu on book3s_32? We can always just access the real vcpu, no? Hrm.
At this stage the vcpu is still part of the vcpu_book3s struct, which
is vmalloc'd, so no, we can't access it in real mode: vmalloc'd memory
isn't covered by the linear mapping that real-mode accesses rely on.
However, I have a patch series, which I'm about to post, that makes
the vcpu kmalloc'd; then 32-bit could access it directly in real
mode, as could 64-bit bare-metal platforms if that made sense.
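To illustrate the direction (purely a sketch -- the back-pointer
field is hypothetical, and the details will be in that series):

	/*
	 * Allocate the vcpu from the kvm_vcpu cache (kmalloc'd memory,
	 * hence covered by the linear mapping) instead of embedding it
	 * in the vmalloc'd vcpu_book3s struct.
	 */
	vcpu = kmem_cache_zalloc(kvm_vcpu_cache, GFP_KERNEL);
	if (!vcpu)
		return ERR_PTR(-ENOMEM);

	vcpu_book3s = vzalloc(sizeof(*vcpu_book3s));
	if (!vcpu_book3s)
		goto free_vcpu;
	vcpu->arch.book3s = vcpu_book3s;	/* hypothetical back-pointer */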
> I think it makes sense to get rid of 99% of the svcpu logic: remove all the bits in C code that ever refer to an svcpu and use the vcpu as the reference for everything, until you hit the real-mode barrier on book3s_64. There, explicitly copy the few things we need in real mode into the PACA and refer only to the PACA in real-mode code.
>
> It'd be nice if we could speed up that code path on systems that don't need to jump through hoops to get to their vcpu data (G5s, PowerStation, etc.), but I don't think it's worth the added complexity. We should rather try to make the code easy to understand :).
I agree completely. :)
Paul.
^ permalink raw reply [flat|nested] 15+ messages in thread
end of thread
Thread overview: 15+ messages
2013-07-11 11:48 [PATCH 0/8] PR KVM fixes and improvements Paul Mackerras
2013-07-11 11:49 ` [PATCH 1/8] KVM: PPC: Book3S PR: Load up SPRG3 register with guest value on guest entry Paul Mackerras
2013-07-25 13:38 ` Alexander Graf
2013-07-25 13:40 ` Alexander Graf
2013-07-11 11:50 ` [PATCH 2/8] KVM: PPC: Book3S PR: Keep volatile reg values in vcpu rather than shadow_vcpu Paul Mackerras
2013-07-13 12:21 ` [PATCH v2 " Paul Mackerras
2013-07-25 13:54 ` [PATCH " Alexander Graf
2013-08-03 2:00 ` Paul Mackerras
2013-07-11 11:51 ` [PATCH 3/8] KVM: PPC: Book3S PR: Rework kvmppc_mmu_book3s_64_xlate() Paul Mackerras
2013-07-11 11:52 ` [PATCH 4/8] KVM: PPC: Book3S PR: Allow guest to use 64k pages Paul Mackerras
2013-07-11 11:53 ` [PATCH 5/8] KVM: PPC: Book3S PR: Use 64k host pages where possible Paul Mackerras
2013-07-11 11:53 ` [PATCH 6/8] KVM: PPC: Book3S PR: Handle PP0 page-protection bit in guest HPTEs Paul Mackerras
2013-07-11 11:54 ` [PATCH 7/8] KVM: PPC: Book3S PR: Correct errors in H_ENTER implementation Paul Mackerras
2013-07-11 11:55 ` [PATCH 8/8] KVM: PPC: Book3S PR: Make HPT accesses and updates SMP-safe Paul Mackerras
2013-07-12 1:59 ` [PATCH v2 " Paul Mackerras