* [PATCH 00/11] HV KVM improvements in preparation for POWER8 support
@ 2013-09-06 3:10 Paul Mackerras
2013-09-06 3:11 ` [PATCH 01/11] KVM: PPC: Book3S HV: Save/restore SIAR and SDAR along with other PMU registers Paul Mackerras
` (11 more replies)
0 siblings, 12 replies; 34+ messages in thread
From: Paul Mackerras @ 2013-09-06 3:10 UTC (permalink / raw)
To: Alexander Graf, kvm-ppc, kvm
This series of patches is based on Alex Graf's kvm-ppc-queue branch.
It fixes some bugs, makes some more registers accessible through the
one_reg interface, and implements some missing features such as
support for the compatibility modes in recent POWER CPUs and support
for the guest having a different timebase origin from the host.
These patches are all useful on POWER7 and will be needed for good
POWER8 support.
Please apply.
Documentation/virtual/kvm/api.txt | 5 +
arch/powerpc/include/asm/exception-64s.h | 8 +
arch/powerpc/include/asm/kvm_book3s_asm.h | 1 +
arch/powerpc/include/asm/kvm_host.h | 6 +
arch/powerpc/include/asm/reg.h | 14 +
arch/powerpc/include/uapi/asm/kvm.h | 10 +
arch/powerpc/kernel/asm-offsets.c | 6 +
arch/powerpc/kvm/book3s.c | 10 +
arch/powerpc/kvm/book3s_hv.c | 96 +++++-
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 544 +++++++++++++++++-------------
10 files changed, 469 insertions(+), 231 deletions(-)
Paul.
* [PATCH 01/11] KVM: PPC: Book3S HV: Save/restore SIAR and SDAR along with other PMU registers
2013-09-06 3:10 [PATCH 00/11] HV KVM improvements in preparation for POWER8 support Paul Mackerras
@ 2013-09-06 3:11 ` Paul Mackerras
2013-09-13 21:51 ` Alexander Graf
2013-09-06 3:17 ` [PATCH 02/11] KVM: PPC: Book3S HV: Implement timebase offset for guests Paul Mackerras
` (10 subsequent siblings)
11 siblings, 1 reply; 34+ messages in thread
From: Paul Mackerras @ 2013-09-06 3:11 UTC (permalink / raw)
To: Alexander Graf, kvm-ppc, kvm
Currently we are not saving and restoring the SIAR and SDAR registers in
the PMU (performance monitor unit) on guest entry and exit. The result
is that performance monitoring tools in the guest could get false
information about where a program was executing and what data it was
accessing at the time of a performance monitor interrupt. This fixes
it by saving and restoring these registers along with the other PMU
registers on guest entry/exit.
This also provides a way for userspace to access these values for a
vcpu via the one_reg interface.
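As a rough sketch (illustration only, not part of this patch), userspace
could read the sampled instruction address via KVM_GET_ONE_REG like this,
assuming vcpu_fd is an already-open vcpu file descriptor:

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Sketch: read the guest's SIAR through the one_reg interface. */
  static uint64_t read_guest_siar(int vcpu_fd)
  {
          uint64_t siar = 0;
          struct kvm_one_reg reg = {
                  .id   = KVM_REG_PPC_SIAR,     /* 64-bit register id */
                  .addr = (uintptr_t)&siar,     /* buffer KVM writes into */
          };
          if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
                  return 0;                     /* error handling elided */
          return siar;
  }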
Signed-off-by: Paul Mackerras <paulus@samba.org>
---
arch/powerpc/include/asm/kvm_host.h | 2 ++
arch/powerpc/kernel/asm-offsets.c | 2 ++
arch/powerpc/kvm/book3s_hv.c | 12 ++++++++++++
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 8 ++++++++
4 files changed, 24 insertions(+)
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 3328353..91b833d 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -498,6 +498,8 @@ struct kvm_vcpu_arch {
u64 mmcr[3];
u32 pmc[8];
+ u64 siar;
+ u64 sdar;
#ifdef CONFIG_KVM_EXIT_TIMING
struct mutex exit_timing_lock;
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 26098c2..6a7916d 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -505,6 +505,8 @@ int main(void)
DEFINE(VCPU_PRODDED, offsetof(struct kvm_vcpu, arch.prodded));
DEFINE(VCPU_MMCR, offsetof(struct kvm_vcpu, arch.mmcr));
DEFINE(VCPU_PMC, offsetof(struct kvm_vcpu, arch.pmc));
+ DEFINE(VCPU_SIAR, offsetof(struct kvm_vcpu, arch.siar));
+ DEFINE(VCPU_SDAR, offsetof(struct kvm_vcpu, arch.sdar));
DEFINE(VCPU_SLB, offsetof(struct kvm_vcpu, arch.slb));
DEFINE(VCPU_SLB_MAX, offsetof(struct kvm_vcpu, arch.slb_max));
DEFINE(VCPU_SLB_NR, offsetof(struct kvm_vcpu, arch.slb_nr));
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 8aadd23..29bdeca2 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -749,6 +749,12 @@ int kvmppc_get_one_reg(struct kvm_vcpu *vcpu, u64 id, union kvmppc_one_reg *val)
i = id - KVM_REG_PPC_PMC1;
*val = get_reg_val(id, vcpu->arch.pmc[i]);
break;
+ case KVM_REG_PPC_SIAR:
+ *val = get_reg_val(id, vcpu->arch.siar);
+ break;
+ case KVM_REG_PPC_SDAR:
+ *val = get_reg_val(id, vcpu->arch.sdar);
+ break;
#ifdef CONFIG_VSX
case KVM_REG_PPC_FPR0 ... KVM_REG_PPC_FPR31:
if (cpu_has_feature(CPU_FTR_VSX)) {
@@ -833,6 +839,12 @@ int kvmppc_set_one_reg(struct kvm_vcpu *vcpu, u64 id, union kvmppc_one_reg *val)
i = id - KVM_REG_PPC_PMC1;
vcpu->arch.pmc[i] = set_reg_val(id, *val);
break;
+ case KVM_REG_PPC_SIAR:
+ vcpu->arch.siar = set_reg_val(id, *val);
+ break;
+ case KVM_REG_PPC_SDAR:
+ vcpu->arch.sdar = set_reg_val(id, *val);
+ break;
#ifdef CONFIG_VSX
case KVM_REG_PPC_FPR0 ... KVM_REG_PPC_FPR31:
if (cpu_has_feature(CPU_FTR_VSX)) {
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 60dce5b..bfb4b0a 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -196,8 +196,12 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
ld r3, VCPU_MMCR(r4)
ld r5, VCPU_MMCR + 8(r4)
ld r6, VCPU_MMCR + 16(r4)
+ ld r7, VCPU_SIAR(r4)
+ ld r8, VCPU_SDAR(r4)
mtspr SPRN_MMCR1, r5
mtspr SPRN_MMCRA, r6
+ mtspr SPRN_SIAR, r7
+ mtspr SPRN_SDAR, r8
mtspr SPRN_MMCR0, r3
isync
@@ -1122,9 +1126,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206)
std r3, VCPU_MMCR(r9) /* if not, set saved MMCR0 to FC */
b 22f
21: mfspr r5, SPRN_MMCR1
+ mfspr r7, SPRN_SIAR
+ mfspr r8, SPRN_SDAR
std r4, VCPU_MMCR(r9)
std r5, VCPU_MMCR + 8(r9)
std r6, VCPU_MMCR + 16(r9)
+ std r7, VCPU_SIAR(r9)
+ std r8, VCPU_SDAR(r9)
mfspr r3, SPRN_PMC1
mfspr r4, SPRN_PMC2
mfspr r5, SPRN_PMC3
--
1.8.4.rc3
* [PATCH 02/11] KVM: PPC: Book3S HV: Implement timebase offset for guests
2013-09-06 3:10 [PATCH 00/11] HV KVM improvements in preparation for POWER8 support Paul Mackerras
2013-09-06 3:11 ` [PATCH 01/11] KVM: PPC: Book3S HV: Save/restore SIAR and SDAR along with other PMU registers Paul Mackerras
@ 2013-09-06 3:17 ` Paul Mackerras
2013-09-13 21:51 ` Alexander Graf
2013-09-06 3:18 ` [PATCH 03/11] KVM: PPC: Book3S: Add GET/SET_ONE_REG interface for VRSAVE Paul Mackerras
` (9 subsequent siblings)
11 siblings, 1 reply; 34+ messages in thread
From: Paul Mackerras @ 2013-09-06 3:17 UTC (permalink / raw)
To: Alexander Graf, kvm-ppc, kvm
This allows guests to have a different timebase origin from the host.
This is needed for migration, where a guest can migrate from one host
to another and the two hosts might have a different timebase origin.
However, the timebase seen by the guest must not go backwards, and
should go forwards only by a small amount corresponding to the time
taken for the migration.
Therefore this provides a new per-vcpu value accessed via the one_reg
interface using the new KVM_REG_PPC_TB_OFFSET identifier. This value
defaults to 0 and is not modified by KVM. On entering the guest, this
value is added onto the timebase, and on exiting the guest, it is
subtracted from the timebase.
This is only supported for recent POWER hardware which has the TBU40
(timebase upper 40 bits) register. Writing to the TBU40 register only
alters the upper 40 bits of the timebase, leaving the lower 24 bits
unchanged. This provides a way to modify the timebase for guest
migration without disturbing the synchronization of the timebase
registers across CPU cores. The kernel rounds up the value given
to a multiple of 2^24.
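Purely as an illustration (not part of this patch), migration software
could program the offset roughly like this; guest_tb and host_tb are
hypothetical values sampled by the management stack, and the identifier
is the one added by this patch:

  /* headers: <stdint.h>, <sys/ioctl.h>, <linux/kvm.h> */
  /* Sketch: set the guest timebase offset after migration.
   * KVM rounds the value up to a multiple of 2^24. */
  static int set_tb_offset(int vcpu_fd, uint64_t guest_tb, uint64_t host_tb)
  {
          uint64_t tb_offset = guest_tb - host_tb;
          struct kvm_one_reg reg = {
                  .id   = KVM_REG_PPC_TB_OFFSET,
                  .addr = (uintptr_t)&tb_offset,
          };
          return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
  }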
Timebase values stored in KVM structures (struct kvm_vcpu, struct
kvmppc_vcore, etc.) are stored as host timebase values. The timebase
values in the dispatch trace log need to be guest timebase values,
however, since that is read directly by the guest. This moves the
setting of vcpu->arch.dec_expires on guest exit to a point after we
have restored the host timebase so that vcpu->arch.dec_expires is a
host timebase value.
Signed-off-by: Paul Mackerras <paulus@samba.org>
---
This differs from the previous version of this patch in that the value
given to the set_one_reg interface is rounded up, as suggested by David
Gibson, rather than truncated.
Documentation/virtual/kvm/api.txt | 1 +
arch/powerpc/include/asm/kvm_host.h | 1 +
arch/powerpc/include/asm/reg.h | 1 +
arch/powerpc/include/uapi/asm/kvm.h | 3 ++
arch/powerpc/kernel/asm-offsets.c | 1 +
arch/powerpc/kvm/book3s_hv.c | 10 ++++++-
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 50 +++++++++++++++++++++++++++------
7 files changed, 57 insertions(+), 10 deletions(-)
diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 341058c..9486e5a 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -1810,6 +1810,7 @@ registers, find a list below:
PPC | KVM_REG_PPC_TLB3PS | 32
PPC | KVM_REG_PPC_EPTCFG | 32
PPC | KVM_REG_PPC_ICP_STATE | 64
+ PPC | KVM_REG_PPC_TB_OFFSET | 64
PPC | KVM_REG_PPC_SPMC1 | 32
PPC | KVM_REG_PPC_SPMC2 | 32
PPC | KVM_REG_PPC_IAMR | 64
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 91b833d..9741bf0 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -294,6 +294,7 @@ struct kvmppc_vcore {
u64 stolen_tb;
u64 preempt_tb;
struct kvm_vcpu *runner;
+ u64 tb_offset; /* guest timebase - host timebase */
};
#define VCORE_ENTRY_COUNT(vc) ((vc)->entry_exit_count & 0xff)
diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index 5d7d9c2..342e4ea 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -243,6 +243,7 @@
#define SPRN_TBRU 0x10D /* Time Base Read Upper Register (user, R/O) */
#define SPRN_TBWL 0x11C /* Time Base Lower Register (super, R/W) */
#define SPRN_TBWU 0x11D /* Time Base Upper Register (super, R/W) */
+#define SPRN_TBU40 0x11E /* Timebase upper 40 bits (hyper, R/W) */
#define SPRN_SPURR 0x134 /* Scaled PURR */
#define SPRN_HSPRG0 0x130 /* Hypervisor Scratch 0 */
#define SPRN_HSPRG1 0x131 /* Hypervisor Scratch 1 */
diff --git a/arch/powerpc/include/uapi/asm/kvm.h b/arch/powerpc/include/uapi/asm/kvm.h
index 7ed41c0..a8124fe 100644
--- a/arch/powerpc/include/uapi/asm/kvm.h
+++ b/arch/powerpc/include/uapi/asm/kvm.h
@@ -504,6 +504,9 @@ struct kvm_get_htab_header {
#define KVM_REG_PPC_TLB3PS (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0x9a)
#define KVM_REG_PPC_EPTCFG (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0x9b)
+/* Timebase offset */
+#define KVM_REG_PPC_TB_OFFSET (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0x9c)
+
/* POWER8 registers */
#define KVM_REG_PPC_SPMC1 (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0x9d)
#define KVM_REG_PPC_SPMC2 (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0x9e)
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 6a7916d..ccb42cd 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -520,6 +520,7 @@ int main(void)
DEFINE(VCORE_NAP_COUNT, offsetof(struct kvmppc_vcore, nap_count));
DEFINE(VCORE_IN_GUEST, offsetof(struct kvmppc_vcore, in_guest));
DEFINE(VCORE_NAPPING_THREADS, offsetof(struct kvmppc_vcore, napping_threads));
+ DEFINE(VCORE_TB_OFFSET, offsetof(struct kvmppc_vcore, tb_offset));
DEFINE(VCPU_SVCPU, offsetof(struct kvmppc_vcpu_book3s, shadow_vcpu) -
offsetof(struct kvmppc_vcpu_book3s, vcpu));
DEFINE(VCPU_SLB_E, offsetof(struct kvmppc_slb, orige));
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 29bdeca2..b930caf 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -489,7 +489,7 @@ static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu,
memset(dt, 0, sizeof(struct dtl_entry));
dt->dispatch_reason = 7;
dt->processor_id = vc->pcpu + vcpu->arch.ptid;
- dt->timebase = now;
+ dt->timebase = now + vc->tb_offset;
dt->enqueue_to_dispatch_time = stolen;
dt->srr0 = kvmppc_get_pc(vcpu);
dt->srr1 = vcpu->arch.shregs.msr;
@@ -793,6 +793,9 @@ int kvmppc_get_one_reg(struct kvm_vcpu *vcpu, u64 id, union kvmppc_one_reg *val)
val->vpaval.length = vcpu->arch.dtl.len;
spin_unlock(&vcpu->arch.vpa_update_lock);
break;
+ case KVM_REG_PPC_TB_OFFSET:
+ *val = get_reg_val(id, vcpu->arch.vcore->tb_offset);
+ break;
default:
r = -EINVAL;
break;
@@ -892,6 +895,11 @@ int kvmppc_set_one_reg(struct kvm_vcpu *vcpu, u64 id, union kvmppc_one_reg *val)
len -= len % sizeof(struct dtl_entry);
r = set_vpa(vcpu, &vcpu->arch.dtl, addr, len);
break;
+ case KVM_REG_PPC_TB_OFFSET:
+ /* round up to multiple of 2^24 */
+ vcpu->arch.vcore->tb_offset =
+ ALIGN(set_reg_val(id, *val), 1UL << 24);
+ break;
default:
r = -EINVAL;
break;
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index bfb4b0a..85f8dd0 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -343,7 +343,22 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
bdnz 28b
ptesync
-22: li r0,1
+ /* Add timebase offset onto timebase */
+22: ld r8,VCORE_TB_OFFSET(r5)
+ cmpdi r8,0
+ beq 37f
+ mftb r6 /* current host timebase */
+ add r8,r8,r6
+ mtspr SPRN_TBU40,r8 /* update upper 40 bits */
+ mftb r7 /* check if lower 24 bits overflowed */
+ clrldi r6,r6,40
+ clrldi r7,r7,40
+ cmpld r7,r6
+ bge 37f
+ addis r8,r8,0x100 /* if so, increment upper 40 bits */
+ mtspr SPRN_TBU40,r8
+
+37: li r0,1
stb r0,VCORE_IN_GUEST(r5) /* signal secondaries to continue */
b 10f
@@ -774,13 +789,6 @@ ext_stash_for_host:
ext_interrupt_to_host:
guest_exit_cont: /* r9 = vcpu, r12 = trap, r13 = paca */
- /* Save DEC */
- mfspr r5,SPRN_DEC
- mftb r6
- extsw r5,r5
- add r5,r5,r6
- std r5,VCPU_DEC_EXPIRES(r9)
-
/* Save more register state */
mfdar r6
mfdsisr r7
@@ -950,7 +958,24 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
mtspr SPRN_SDR1,r6 /* switch to partition page table */
mtspr SPRN_LPID,r7
isync
- li r0,0
+
+ /* Subtract timebase offset from timebase */
+ ld r8,VCORE_TB_OFFSET(r5)
+ cmpdi r8,0
+ beq 17f
+ mftb r6 /* current host timebase */
+ subf r8,r8,r6
+ mtspr SPRN_TBU40,r8 /* update upper 40 bits */
+ mftb r7 /* check if lower 24 bits overflowed */
+ clrldi r6,r6,40
+ clrldi r7,r7,40
+ cmpld r7,r6
+ bge 17f
+ addis r8,r8,0x100 /* if so, increment upper 40 bits */
+ mtspr SPRN_TBU40,r8
+
+ /* Signal secondary CPUs to continue */
+17: li r0,0
stb r0,VCORE_IN_GUEST(r5)
lis r8,0x7fff /* MAX_INT@h */
mtspr SPRN_HDEC,r8
@@ -1044,6 +1069,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
1: addi r8,r8,16
.endr
+ /* Save DEC */
+ mfspr r5,SPRN_DEC
+ mftb r6
+ extsw r5,r5
+ add r5,r5,r6
+ std r5,VCPU_DEC_EXPIRES(r9)
+
/* Save and reset AMR and UAMOR before turning on the MMU */
BEGIN_FTR_SECTION
mfspr r5,SPRN_AMR
--
1.8.4.rc3
* [PATCH 03/11] KVM: PPC: Book3S: Add GET/SET_ONE_REG interface for VRSAVE
2013-09-06 3:10 [PATCH 00/11] HV KVM improvements in preparation for POWER8 support Paul Mackerras
2013-09-06 3:11 ` [PATCH 01/11] KVM: PPC: Book3S HV: Save/restore SIAR and SDAR along with other PMU registers Paul Mackerras
2013-09-06 3:17 ` [PATCH 02/11] KVM: PPC: Book3S HV: Implement timebase offset for guests Paul Mackerras
@ 2013-09-06 3:18 ` Paul Mackerras
2013-09-13 21:51 ` Alexander Graf
2013-09-06 3:21 ` [PATCH 04/11] KVM: PPC: Book3S HV: Add GET/SET_ONE_REG interface for LPCR Paul Mackerras
` (8 subsequent siblings)
11 siblings, 1 reply; 34+ messages in thread
From: Paul Mackerras @ 2013-09-06 3:18 UTC (permalink / raw)
To: Alexander Graf, kvm-ppc, kvm
The VRSAVE register value for a vcpu is accessible through the
GET/SET_SREGS interface for Book E processors, but not for Book 3S
processors. In order to make this accessible for Book 3S processors,
this adds a new register identifier for GET/SET_ONE_REG, and adds
the code to implement it.
Signed-off-by: Paul Mackerras <paulus@samba.org>
---
Documentation/virtual/kvm/api.txt | 1 +
arch/powerpc/include/uapi/asm/kvm.h | 2 ++
arch/powerpc/kvm/book3s.c | 10 ++++++++++
3 files changed, 13 insertions(+)
diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 9486e5a..c36ff9af 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -1834,6 +1834,7 @@ registers, find a list below:
PPC | KVM_REG_PPC_TCSCR | 64
PPC | KVM_REG_PPC_PID | 64
PPC | KVM_REG_PPC_ACOP | 64
+ PPC | KVM_REG_PPC_VRSAVE | 32
PPC | KVM_REG_PPC_TM_GPR0 | 64
...
PPC | KVM_REG_PPC_TM_GPR31 | 64
diff --git a/arch/powerpc/include/uapi/asm/kvm.h b/arch/powerpc/include/uapi/asm/kvm.h
index a8124fe..b98bf3f 100644
--- a/arch/powerpc/include/uapi/asm/kvm.h
+++ b/arch/powerpc/include/uapi/asm/kvm.h
@@ -532,6 +532,8 @@ struct kvm_get_htab_header {
#define KVM_REG_PPC_PID (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xb2)
#define KVM_REG_PPC_ACOP (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xb3)
+#define KVM_REG_PPC_VRSAVE (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0xb4)
+
/* Transactional Memory checkpointed state:
* This is all GPRs, all VSX regs and a subset of SPRs
*/
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 700df6f..f97369d 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -528,6 +528,9 @@ int kvm_vcpu_ioctl_get_one_reg(struct kvm_vcpu *vcpu, struct kvm_one_reg *reg)
}
val = get_reg_val(reg->id, vcpu->arch.vscr.u[3]);
break;
+ case KVM_REG_PPC_VRSAVE:
+ val = get_reg_val(reg->id, vcpu->arch.vrsave);
+ break;
#endif /* CONFIG_ALTIVEC */
case KVM_REG_PPC_DEBUG_INST: {
u32 opcode = INS_TW;
@@ -605,6 +608,13 @@ int kvm_vcpu_ioctl_set_one_reg(struct kvm_vcpu *vcpu, struct kvm_one_reg *reg)
}
vcpu->arch.vscr.u[3] = set_reg_val(reg->id, val);
break;
+ case KVM_REG_PPC_VRSAVE:
+ if (!cpu_has_feature(CPU_FTR_ALTIVEC)) {
+ r = -ENXIO;
+ break;
+ }
+ vcpu->arch.vrsave = set_reg_val(reg->id, val);
+ break;
#endif /* CONFIG_ALTIVEC */
#ifdef CONFIG_KVM_XICS
case KVM_REG_PPC_ICP_STATE:
--
1.8.4.rc3
* [PATCH 04/11] KVM: PPC: Book3S HV: Add GET/SET_ONE_REG interface for LPCR
2013-09-06 3:10 [PATCH 00/11] HV KVM improvements in preparation for POWER8 support Paul Mackerras
` (2 preceding siblings ...)
2013-09-06 3:18 ` [PATCH 03/11] KVM: PPC: Book3S: Add GET/SET_ONE_REG interface for VRSAVE Paul Mackerras
@ 2013-09-06 3:21 ` Paul Mackerras
2013-09-13 18:36 ` Alexander Graf
2013-09-06 3:22 ` [PATCH 05/11] KVM: PPC: Book3S HV: Add support for guest Program Priority Register Paul Mackerras
` (7 subsequent siblings)
11 siblings, 1 reply; 34+ messages in thread
From: Paul Mackerras @ 2013-09-06 3:21 UTC (permalink / raw)
To: Alexander Graf, kvm-ppc, kvm
This adds the ability for userspace to read and write the LPCR
(Logical Partitioning Control Register) value relating to a guest
via the GET/SET_ONE_REG interface. There is only one LPCR value
for the guest, which can be accessed through any vcpu. Userspace
can only modify the following fields of the LPCR value:
  DPFD  Default prefetch depth
  ILE   Interrupt little-endian
  TC    Translation control (secondary HPT hash group search disable)
Signed-off-by: Paul Mackerras <paulus@samba.org>
---
Documentation/virtual/kvm/api.txt | 1 +
arch/powerpc/include/asm/reg.h | 2 ++
arch/powerpc/include/uapi/asm/kvm.h | 1 +
arch/powerpc/kvm/book3s_hv.c | 21 +++++++++++++++++++++
4 files changed, 25 insertions(+)
diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index c36ff9af..1030ac9 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -1835,6 +1835,7 @@ registers, find a list below:
PPC | KVM_REG_PPC_PID | 64
PPC | KVM_REG_PPC_ACOP | 64
PPC | KVM_REG_PPC_VRSAVE | 32
+ PPC | KVM_REG_PPC_LPCR | 64
PPC | KVM_REG_PPC_TM_GPR0 | 64
...
PPC | KVM_REG_PPC_TM_GPR31 | 64
diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index 342e4ea..3fc0d06 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -275,6 +275,7 @@
#define LPCR_ISL (1ul << (63-2))
#define LPCR_VC_SH (63-2)
#define LPCR_DPFD_SH (63-11)
+#define LPCR_DPFD (7ul << LPCR_DPFD_SH)
#define LPCR_VRMASD (0x1ful << (63-16))
#define LPCR_VRMA_L (1ul << (63-12))
#define LPCR_VRMA_LP0 (1ul << (63-15))
@@ -291,6 +292,7 @@
#define LPCR_PECE2 0x00001000 /* machine check etc can cause exit */
#define LPCR_MER 0x00000800 /* Mediated External Exception */
#define LPCR_MER_SH 11
+#define LPCR_TC 0x00000200 /* Translation control */
#define LPCR_LPES 0x0000000c
#define LPCR_LPES0 0x00000008 /* LPAR Env selector 0 */
#define LPCR_LPES1 0x00000004 /* LPAR Env selector 1 */
diff --git a/arch/powerpc/include/uapi/asm/kvm.h b/arch/powerpc/include/uapi/asm/kvm.h
index b98bf3f..e42127d 100644
--- a/arch/powerpc/include/uapi/asm/kvm.h
+++ b/arch/powerpc/include/uapi/asm/kvm.h
@@ -533,6 +533,7 @@ struct kvm_get_htab_header {
#define KVM_REG_PPC_ACOP (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xb3)
#define KVM_REG_PPC_VRSAVE (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0xb4)
+#define KVM_REG_PPC_LPCR (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0xb5)
/* Transactional Memory checkpointed state:
* This is all GPRs, all VSX regs and a subset of SPRs
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index b930caf..9c878d7 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -714,6 +714,21 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
return 0;
}
+static void kvmppc_set_lpcr(struct kvm_vcpu *vcpu, u64 new_lpcr)
+{
+ struct kvm *kvm = vcpu->kvm;
+ u64 mask;
+
+ mutex_lock(&kvm->lock);
+ /*
+ * Userspace can only modify DPFD (default prefetch depth),
+ * ILE (interrupt little-endian) and TC (translation control).
+ */
+ mask = LPCR_DPFD | LPCR_ILE | LPCR_TC;
+ kvm->arch.lpcr = (kvm->arch.lpcr & ~mask) | (new_lpcr & mask);
+ mutex_unlock(&kvm->lock);
+}
+
int kvmppc_get_one_reg(struct kvm_vcpu *vcpu, u64 id, union kvmppc_one_reg *val)
{
int r = 0;
@@ -796,6 +811,9 @@ int kvmppc_get_one_reg(struct kvm_vcpu *vcpu, u64 id, union kvmppc_one_reg *val)
case KVM_REG_PPC_TB_OFFSET:
*val = get_reg_val(id, vcpu->arch.vcore->tb_offset);
break;
+ case KVM_REG_PPC_LPCR:
+ *val = get_reg_val(id, vcpu->kvm->arch.lpcr);
+ break;
default:
r = -EINVAL;
break;
@@ -900,6 +918,9 @@ int kvmppc_set_one_reg(struct kvm_vcpu *vcpu, u64 id, union kvmppc_one_reg *val)
vcpu->arch.vcore->tb_offset =
ALIGN(set_reg_val(id, *val), 1UL << 24);
break;
+ case KVM_REG_PPC_LPCR:
+ kvmppc_set_lpcr(vcpu, set_reg_val(id, *val));
+ break;
default:
r = -EINVAL;
break;
--
1.8.4.rc3
* [PATCH 05/11] KVM: PPC: Book3S HV: Add support for guest Program Priority Register
2013-09-06 3:10 [PATCH 00/11] HV KVM improvements in preparation for POWER8 support Paul Mackerras
` (3 preceding siblings ...)
2013-09-06 3:21 ` [PATCH 04/11] KVM: PPC: Book3S HV: Add GET/SET_ONE_REG interface for LPCR Paul Mackerras
@ 2013-09-06 3:22 ` Paul Mackerras
2013-09-13 21:51 ` Alexander Graf
2013-09-17 3:29 ` Benjamin Herrenschmidt
2013-09-06 3:22 ` [PATCH 06/11] KVM: PPC: Book3S HV: Support POWER6 compatibility mode on POWER7 Paul Mackerras
` (6 subsequent siblings)
11 siblings, 2 replies; 34+ messages in thread
From: Paul Mackerras @ 2013-09-06 3:22 UTC (permalink / raw)
To: Alexander Graf, kvm-ppc, kvm
POWER7 and later IBM server processors have a register called the
Program Priority Register (PPR), which controls the priority of
each hardware CPU SMT thread, and affects how fast it runs compared
to other SMT threads. This priority can be controlled by writing to
the PPR or by use of a set of instructions of the form "or rN,rN,rN"
which are otherwise no-ops but have been defined to set the priority
to particular levels.
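For illustration only (a sketch, not part of this patch), the
priority-setting forms can be used from C via inline assembly;
"or 1,1,1" selects low priority and "or 2,2,2" medium priority:

  /* Sketch: the "or rN,rN,rN" priority no-ops, as used from C. */
  static inline void thread_priority_low(void)
  {
          asm volatile("or 1,1,1");
  }

  static inline void thread_priority_medium(void)
  {
          asm volatile("or 2,2,2");
  }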
This adds code to context switch the PPR when entering and exiting
guests and to make the PPR value accessible through the SET/GET_ONE_REG
interface. When entering the guest, we set the PPR as late as
possible, because if we are setting a low thread priority it will
make the code run slowly from that point on. Similarly, the
first-level interrupt handlers save the PPR value in the PACA very
early on, and set the thread priority to the medium level, so that
the interrupt handling code runs at a reasonable speed.
Signed-off-by: Paul Mackerras <paulus@samba.org>
---
Documentation/virtual/kvm/api.txt | 1 +
arch/powerpc/include/asm/exception-64s.h | 8 ++++++++
arch/powerpc/include/asm/kvm_book3s_asm.h | 1 +
arch/powerpc/include/asm/kvm_host.h | 1 +
arch/powerpc/include/uapi/asm/kvm.h | 1 +
arch/powerpc/kernel/asm-offsets.c | 2 ++
arch/powerpc/kvm/book3s_hv.c | 6 ++++++
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 12 +++++++++++-
8 files changed, 31 insertions(+), 1 deletion(-)
diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 1030ac9..34a32b6 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -1836,6 +1836,7 @@ registers, find a list below:
PPC | KVM_REG_PPC_ACOP | 64
PPC | KVM_REG_PPC_VRSAVE | 32
PPC | KVM_REG_PPC_LPCR | 64
+ PPC | KVM_REG_PPC_PPR | 64
PPC | KVM_REG_PPC_TM_GPR0 | 64
...
PPC | KVM_REG_PPC_TM_GPR31 | 64
diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 07ca627..b86c4db 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -203,6 +203,10 @@ do_kvm_##n: \
ld r10,area+EX_CFAR(r13); \
std r10,HSTATE_CFAR(r13); \
END_FTR_SECTION_NESTED(CPU_FTR_CFAR,CPU_FTR_CFAR,947); \
+ BEGIN_FTR_SECTION_NESTED(948) \
+ ld r10,area+EX_PPR(r13); \
+ std r10,HSTATE_PPR(r13); \
+ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948); \
ld r10,area+EX_R10(r13); \
stw r9,HSTATE_SCRATCH1(r13); \
ld r9,area+EX_R9(r13); \
@@ -216,6 +220,10 @@ do_kvm_##n: \
ld r10,area+EX_R10(r13); \
beq 89f; \
stw r9,HSTATE_SCRATCH1(r13); \
+ BEGIN_FTR_SECTION_NESTED(948) \
+ ld r9,area+EX_PPR(r13); \
+ std r9,HSTATE_PPR(r13); \
+ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948); \
ld r9,area+EX_R9(r13); \
std r12,HSTATE_SCRATCH0(r13); \
li r12,n; \
diff --git a/arch/powerpc/include/asm/kvm_book3s_asm.h b/arch/powerpc/include/asm/kvm_book3s_asm.h
index 9039d3c..22f4606 100644
--- a/arch/powerpc/include/asm/kvm_book3s_asm.h
+++ b/arch/powerpc/include/asm/kvm_book3s_asm.h
@@ -101,6 +101,7 @@ struct kvmppc_host_state {
#endif
#ifdef CONFIG_PPC_BOOK3S_64
u64 cfar;
+ u64 ppr;
#endif
};
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 9741bf0..b0dcd18 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -464,6 +464,7 @@ struct kvm_vcpu_arch {
u32 ctrl;
ulong dabr;
ulong cfar;
+ ulong ppr;
#endif
u32 vrsave; /* also USPRG0 */
u32 mmucr;
diff --git a/arch/powerpc/include/uapi/asm/kvm.h b/arch/powerpc/include/uapi/asm/kvm.h
index e42127d..fab6bc1 100644
--- a/arch/powerpc/include/uapi/asm/kvm.h
+++ b/arch/powerpc/include/uapi/asm/kvm.h
@@ -534,6 +534,7 @@ struct kvm_get_htab_header {
#define KVM_REG_PPC_VRSAVE (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0xb4)
#define KVM_REG_PPC_LPCR (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0xb5)
+#define KVM_REG_PPC_PPR (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xb6)
/* Transactional Memory checkpointed state:
* This is all GPRs, all VSX regs and a subset of SPRs
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index ccb42cd..5c6ea96 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -516,6 +516,7 @@ int main(void)
DEFINE(VCPU_TRAP, offsetof(struct kvm_vcpu, arch.trap));
DEFINE(VCPU_PTID, offsetof(struct kvm_vcpu, arch.ptid));
DEFINE(VCPU_CFAR, offsetof(struct kvm_vcpu, arch.cfar));
+ DEFINE(VCPU_PPR, offsetof(struct kvm_vcpu, arch.ppr));
DEFINE(VCORE_ENTRY_EXIT, offsetof(struct kvmppc_vcore, entry_exit_count));
DEFINE(VCORE_NAP_COUNT, offsetof(struct kvmppc_vcore, nap_count));
DEFINE(VCORE_IN_GUEST, offsetof(struct kvmppc_vcore, in_guest));
@@ -600,6 +601,7 @@ int main(void)
#ifdef CONFIG_PPC_BOOK3S_64
HSTATE_FIELD(HSTATE_CFAR, cfar);
+ HSTATE_FIELD(HSTATE_PPR, ppr);
#endif /* CONFIG_PPC_BOOK3S_64 */
#else /* CONFIG_PPC_BOOK3S */
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 9c878d7..eceff7e 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -814,6 +814,9 @@ int kvmppc_get_one_reg(struct kvm_vcpu *vcpu, u64 id, union kvmppc_one_reg *val)
case KVM_REG_PPC_LPCR:
*val = get_reg_val(id, vcpu->kvm->arch.lpcr);
break;
+ case KVM_REG_PPC_PPR:
+ *val = get_reg_val(id, vcpu->arch.ppr);
+ break;
default:
r = -EINVAL;
break;
@@ -921,6 +924,9 @@ int kvmppc_set_one_reg(struct kvm_vcpu *vcpu, u64 id, union kvmppc_one_reg *val)
case KVM_REG_PPC_LPCR:
kvmppc_set_lpcr(vcpu, set_reg_val(id, *val));
break;
+ case KVM_REG_PPC_PPR:
+ vcpu->arch.ppr = set_reg_val(id, *val);
+ break;
default:
r = -EINVAL;
break;
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 85f8dd0..88e7068 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -561,13 +561,15 @@ BEGIN_FTR_SECTION
ld r5, VCPU_CFAR(r4)
mtspr SPRN_CFAR, r5
END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
+BEGIN_FTR_SECTION
+ ld r0, VCPU_PPR(r4)
+END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
ld r5, VCPU_LR(r4)
lwz r6, VCPU_CR(r4)
mtlr r5
mtcr r6
- ld r0, VCPU_GPR(R0)(r4)
ld r1, VCPU_GPR(R1)(r4)
ld r2, VCPU_GPR(R2)(r4)
ld r3, VCPU_GPR(R3)(r4)
@@ -581,6 +583,10 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
ld r12, VCPU_GPR(R12)(r4)
ld r13, VCPU_GPR(R13)(r4)
+BEGIN_FTR_SECTION
+ mtspr SPRN_PPR, r0
+END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
+ ld r0, VCPU_GPR(R0)(r4)
ld r4, VCPU_GPR(R4)(r4)
hrfid
@@ -631,6 +637,10 @@ BEGIN_FTR_SECTION
ld r3, HSTATE_CFAR(r13)
std r3, VCPU_CFAR(r9)
END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
+BEGIN_FTR_SECTION
+ ld r4, HSTATE_PPR(r13)
+ std r4, VCPU_PPR(r9)
+END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
/* Restore R1/R2 so we can handle faults */
ld r1, HSTATE_HOST_R1(r13)
--
1.8.4.rc3
* [PATCH 06/11] KVM: PPC: Book3S HV: Support POWER6 compatibility mode on POWER7
2013-09-06 3:10 [PATCH 00/11] HV KVM improvements in preparation for POWER8 support Paul Mackerras
` (4 preceding siblings ...)
2013-09-06 3:22 ` [PATCH 05/11] KVM: PPC: Book3S HV: Add support for guest Program Priority Register Paul Mackerras
@ 2013-09-06 3:22 ` Paul Mackerras
2013-09-06 5:28 ` Aneesh Kumar K.V
2013-09-13 19:58 ` Alexander Graf
2013-09-06 3:23 ` [PATCH 07/11] KVM: PPC: Book3S HV: Implement H_CONFER Paul Mackerras
` (5 subsequent siblings)
11 siblings, 2 replies; 34+ messages in thread
From: Paul Mackerras @ 2013-09-06 3:22 UTC (permalink / raw)
To: Alexander Graf, kvm-ppc, kvm
This enables us to use the Processor Compatibility Register (PCR) on
POWER7 to put the processor into architecture 2.05 compatibility mode
when running a guest. In this mode the new instructions and registers
that were introduced on POWER7 are disabled in user mode. This
includes all the VSX facilities plus several other instructions such
as ldbrx, stdbrx, popcntw, popcntd, etc.
To select this mode, we have a new register accessible through the
set/get_one_reg interface, called KVM_REG_PPC_ARCH_COMPAT. Setting
this to zero gives the full set of capabilities of the processor.
Setting it to one of the "logical" PVR values defined in PAPR puts
the vcpu into the compatibility mode for the corresponding
architecture level. The supported values are:
  0x0f000002  Architecture 2.05 (POWER6)
  0x0f000003  Architecture 2.06 (POWER7)
  0x0f100003  Architecture 2.06+ (POWER7+)
Since the PCR is per-core, the architecture compatibility level and
the corresponding PCR value are stored in the struct kvmppc_vcore, and
are therefore shared between all vcpus in a virtual core.
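As an illustration only (not part of this patch; KVM_REG_PPC_ARCH_COMPAT
is the identifier added by this series), selecting POWER6 compatibility
mode from userspace would look roughly like this:

  /* headers: <stdint.h>, <sys/ioctl.h>, <linux/kvm.h> */
  /* Sketch: put the vcpu's virtual core into architecture 2.05 mode. */
  static int set_power6_compat(int vcpu_fd)
  {
          uint32_t compat = 0x0f000002;         /* logical PVR for arch 2.05 */
          struct kvm_one_reg reg = {
                  .id   = KVM_REG_PPC_ARCH_COMPAT,  /* 32-bit register */
                  .addr = (uintptr_t)&compat,
          };
          return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
  }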
Signed-off-by: Paul Mackerras <paulus@samba.org>
---
Documentation/virtual/kvm/api.txt | 1 +
arch/powerpc/include/asm/kvm_host.h | 2 ++
arch/powerpc/include/asm/reg.h | 11 +++++++++++
arch/powerpc/include/uapi/asm/kvm.h | 3 +++
arch/powerpc/kernel/asm-offsets.c | 1 +
arch/powerpc/kvm/book3s_hv.c | 35 +++++++++++++++++++++++++++++++++
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 11 +++++++++--
7 files changed, 62 insertions(+), 2 deletions(-)
diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 34a32b6..f1f300f 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -1837,6 +1837,7 @@ registers, find a list below:
PPC | KVM_REG_PPC_VRSAVE | 32
PPC | KVM_REG_PPC_LPCR | 64
PPC | KVM_REG_PPC_PPR | 64
+ PPC | KVM_REG_PPC_ARCH_COMPAT | 32
PPC | KVM_REG_PPC_TM_GPR0 | 64
...
PPC | KVM_REG_PPC_TM_GPR31 | 64
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index b0dcd18..5a40270 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -295,6 +295,8 @@ struct kvmppc_vcore {
u64 preempt_tb;
struct kvm_vcpu *runner;
u64 tb_offset; /* guest timebase - host timebase */
+ u32 arch_compat;
+ ulong pcr;
};
#define VCORE_ENTRY_COUNT(vc) ((vc)->entry_exit_count & 0xff)
diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index 3fc0d06..52ff962 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -305,6 +305,10 @@
#define LPID_RSVD 0x3ff /* Reserved LPID for partn switching */
#define SPRN_HMER 0x150 /* Hardware m? error recovery */
#define SPRN_HMEER 0x151 /* Hardware m? enable error recovery */
+#define SPRN_PCR 0x152 /* Processor compatibility register */
+#define PCR_VEC_DIS (1ul << (63-0)) /* Vec. disable (pre POWER8) */
+#define PCR_VSX_DIS (1ul << (63-1)) /* VSX disable (pre POWER8) */
+#define PCR_ARCH_205 0x2 /* Architecture 2.05 */
#define SPRN_HEIR 0x153 /* Hypervisor Emulated Instruction Register */
#define SPRN_TLBINDEXR 0x154 /* P7 TLB control register */
#define SPRN_TLBVPNR 0x155 /* P7 TLB control register */
@@ -1095,6 +1099,13 @@
#define PVR_BE 0x0070
#define PVR_PA6T 0x0090
+/* "Logical" PVR values defined in PAPR, representing architecture levels */
+#define PVR_ARCH_204 0x0f000001
+#define PVR_ARCH_205 0x0f000002
+#define PVR_ARCH_206 0x0f000003
+#define PVR_ARCH_206p 0x0f100003
+#define PVR_ARCH_207 0x0f000004
+
/* Macros for setting and retrieving special purpose registers */
#ifndef __ASSEMBLY__
#define mfmsr() ({unsigned long rval; \
diff --git a/arch/powerpc/include/uapi/asm/kvm.h b/arch/powerpc/include/uapi/asm/kvm.h
index fab6bc1..e420d46 100644
--- a/arch/powerpc/include/uapi/asm/kvm.h
+++ b/arch/powerpc/include/uapi/asm/kvm.h
@@ -536,6 +536,9 @@ struct kvm_get_htab_header {
#define KVM_REG_PPC_LPCR (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0xb5)
#define KVM_REG_PPC_PPR (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xb6)
+/* Architecture compatibility level */
+#define KVM_REG_PPC_ARCH_COMPAT (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0xb7)
+
/* Transactional Memory checkpointed state:
* This is all GPRs, all VSX regs and a subset of SPRs
*/
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 5c6ea96..115dd64 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -522,6 +522,7 @@ int main(void)
DEFINE(VCORE_IN_GUEST, offsetof(struct kvmppc_vcore, in_guest));
DEFINE(VCORE_NAPPING_THREADS, offsetof(struct kvmppc_vcore, napping_threads));
DEFINE(VCORE_TB_OFFSET, offsetof(struct kvmppc_vcore, tb_offset));
+ DEFINE(VCORE_PCR, offsetof(struct kvmppc_vcore, pcr));
DEFINE(VCPU_SVCPU, offsetof(struct kvmppc_vcpu_book3s, shadow_vcpu) -
offsetof(struct kvmppc_vcpu_book3s, vcpu));
DEFINE(VCPU_SLB_E, offsetof(struct kvmppc_slb, orige));
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index eceff7e..1a10afa 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -166,6 +166,35 @@ void kvmppc_set_pvr(struct kvm_vcpu *vcpu, u32 pvr)
vcpu->arch.pvr = pvr;
}
+int kvmppc_set_arch_compat(struct kvm_vcpu *vcpu, u32 arch_compat)
+{
+ unsigned long pcr = 0;
+ struct kvmppc_vcore *vc = vcpu->arch.vcore;
+
+ if (arch_compat) {
+ if (!cpu_has_feature(CPU_FTR_ARCH_206))
+ return -EINVAL; /* 970 has no compat mode support */
+
+ switch (arch_compat) {
+ case PVR_ARCH_205:
+ pcr = PCR_ARCH_205;
+ break;
+ case PVR_ARCH_206:
+ case PVR_ARCH_206p:
+ break;
+ default:
+ return -EINVAL;
+ }
+ }
+
+ spin_lock(&vc->lock);
+ vc->arch_compat = arch_compat;
+ vc->pcr = pcr;
+ spin_unlock(&vc->lock);
+
+ return 0;
+}
+
void kvmppc_dump_regs(struct kvm_vcpu *vcpu)
{
int r;
@@ -817,6 +846,9 @@ int kvmppc_get_one_reg(struct kvm_vcpu *vcpu, u64 id, union kvmppc_one_reg *val)
case KVM_REG_PPC_PPR:
*val = get_reg_val(id, vcpu->arch.ppr);
break;
+ case KVM_REG_PPC_ARCH_COMPAT:
+ *val = get_reg_val(id, vcpu->arch.vcore->arch_compat);
+ break;
default:
r = -EINVAL;
break;
@@ -927,6 +959,9 @@ int kvmppc_set_one_reg(struct kvm_vcpu *vcpu, u64 id, union kvmppc_one_reg *val)
case KVM_REG_PPC_PPR:
vcpu->arch.ppr = set_reg_val(id, *val);
break;
+ case KVM_REG_PPC_ARCH_COMPAT:
+ r = kvmppc_set_arch_compat(vcpu, set_reg_val(id, *val));
+ break;
default:
r = -EINVAL;
break;
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 88e7068..023d8600 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -358,7 +358,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
addis r8,r8,0x100 /* if so, increment upper 40 bits */
mtspr SPRN_TBU40,r8
-37: li r0,1
+ /* Load guest PCR value to select appropriate compat mode */
+37: ld r7, VCORE_PCR(r5)
+ mtspr SPRN_PCR, r7
+
+ li r0,1
stb r0,VCORE_IN_GUEST(r5) /* signal secondaries to continue */
b 10f
@@ -984,8 +988,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
addis r8,r8,0x100 /* if so, increment upper 40 bits */
mtspr SPRN_TBU40,r8
- /* Signal secondary CPUs to continue */
+ /* Reset PCR */
17: li r0,0
+ mtspr SPRN_PCR,r0
+
+ /* Signal secondary CPUs to continue */
stb r0,VCORE_IN_GUEST(r5)
lis r8,0x7fff /* MAX_INT@h */
mtspr SPRN_HDEC,r8
--
1.8.4.rc3
* [PATCH 07/11] KVM: PPC: Book3S HV: Implement H_CONFER
2013-09-06 3:10 [PATCH 00/11] HV KVM improvements in preparation for POWER8 support Paul Mackerras
` (5 preceding siblings ...)
2013-09-06 3:22 ` [PATCH 06/11] KVM: PPC: Book3S HV: Support POWER6 compatibility mode on POWER7 Paul Mackerras
@ 2013-09-06 3:23 ` Paul Mackerras
2013-09-13 21:51 ` Alexander Graf
2013-09-06 3:23 ` [PATCH 08/11] KVM: PPC: Book3S HV: Restructure kvmppc_hv_entry to be a subroutine Paul Mackerras
` (4 subsequent siblings)
11 siblings, 1 reply; 34+ messages in thread
From: Paul Mackerras @ 2013-09-06 3:23 UTC (permalink / raw)
To: Alexander Graf, kvm-ppc, kvm
The H_CONFER hypercall is used when a guest vcpu is spinning on a lock
held by another vcpu which has been preempted, and the spinning vcpu
wishes to give its timeslice to the lock holder. We implement this
in the straightforward way using kvm_vcpu_yield_to().
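For context, here is a guest-side sketch of issuing H_CONFER (illustration
only; it mirrors what the pseries spinlock code in arch/powerpc/lib/locks.c
already does):

  /* Guest kernel context: needs <asm/hvcall.h> and <asm/smp.h>.
   * Confer our timeslice to the (preempted) lock holder. */
  static void confer_to_holder(int holder_cpu, unsigned int yield_count)
  {
          plpar_hcall_norets(H_CONFER,
                             get_hard_smp_processor_id(holder_cpu),
                             yield_count);
  }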
Signed-off-by: Paul Mackerras <paulus@samba.org>
---
arch/powerpc/kvm/book3s_hv.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 1a10afa..0bb23a9 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -567,6 +567,15 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
}
break;
case H_CONFER:
+ target = kvmppc_get_gpr(vcpu, 4);
+ if (target == -1)
+ break;
+ tvcpu = kvmppc_find_vcpu(vcpu->kvm, target);
+ if (!tvcpu) {
+ ret = H_PARAMETER;
+ break;
+ }
+ kvm_vcpu_yield_to(tvcpu);
break;
case H_REGISTER_VPA:
ret = do_h_register_vpa(vcpu, kvmppc_get_gpr(vcpu, 4),
--
1.8.4.rc3
* [PATCH 08/11] KVM: PPC: Book3S HV: Restructure kvmppc_hv_entry to be a subroutine
2013-09-06 3:10 [PATCH 00/11] HV KVM improvements in preparation for POWER8 support Paul Mackerras
` (6 preceding siblings ...)
2013-09-06 3:23 ` [PATCH 07/11] KVM: PPC: Book3S HV: Implement H_CONFER Paul Mackerras
@ 2013-09-06 3:23 ` Paul Mackerras
2013-09-13 21:51 ` Alexander Graf
2013-09-06 3:24 ` [PATCH 09/11] KVM: PPC: Book3S HV: Pull out interrupt-reading code into " Paul Mackerras
` (3 subsequent siblings)
11 siblings, 1 reply; 34+ messages in thread
From: Paul Mackerras @ 2013-09-06 3:23 UTC (permalink / raw)
To: Alexander Graf, kvm-ppc, kvm
We have two paths into and out of the low-level guest entry and exit
code: from a vcpu task via kvmppc_hv_entry_trampoline, and from the
system reset vector for an offline secondary thread on POWER7 via
kvm_start_guest. Currently both just branch to kvmppc_hv_entry to
enter the guest, and on guest exit, we test the vcpu physical thread
ID to detect which way we came in and thus whether we should return
to the vcpu task or go back to nap mode.
In order to make the code flow clearer, and to keep the code relating
to each flow together, this turns kvmppc_hv_entry into a subroutine
that follows the normal conventions for call and return. This means
that kvmppc_hv_entry_trampoline() and kvmppc_hv_entry() now establish
normal stack frames, and we use the normal stack slots for saving
return addresses rather than local_paca->kvm_hstate.vmhandler. Apart
from that this is mostly moving code around unchanged.
Signed-off-by: Paul Mackerras <paulus@samba.org>
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 344 +++++++++++++++++---------------
1 file changed, 178 insertions(+), 166 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 023d8600..d9ab139 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -62,8 +62,11 @@ kvmppc_skip_Hinterrupt:
* LR = return address to continue at after eventually re-enabling MMU
*/
_GLOBAL(kvmppc_hv_entry_trampoline)
+ mflr r0
+ std r0, PPC_LR_STKOFF(r1)
+ stdu r1, -112(r1)
mfmsr r10
- LOAD_REG_ADDR(r5, kvmppc_hv_entry)
+ LOAD_REG_ADDR(r5, kvmppc_call_hv_entry)
li r0,MSR_RI
andc r0,r10,r0
li r6,MSR_IR | MSR_DR
@@ -73,11 +76,103 @@ _GLOBAL(kvmppc_hv_entry_trampoline)
mtsrr1 r6
RFI
-/******************************************************************************
- * *
- * Entry code *
- * *
- *****************************************************************************/
+kvmppc_call_hv_entry:
+ bl kvmppc_hv_entry
+
+ /* Back from guest - restore host state and return to caller */
+
+ /* Restore host DABR and DABRX */
+ ld r5,HSTATE_DABR(r13)
+ li r6,7
+ mtspr SPRN_DABR,r5
+ mtspr SPRN_DABRX,r6
+
+ /* Restore SPRG3 */
+ ld r3,PACA_SPRG3(r13)
+ mtspr SPRN_SPRG3,r3
+
+ /*
+ * Reload DEC. HDEC interrupts were disabled when
+ * we reloaded the host's LPCR value.
+ */
+ ld r3, HSTATE_DECEXP(r13)
+ mftb r4
+ subf r4, r4, r3
+ mtspr SPRN_DEC, r4
+
+ /* Reload the host's PMU registers */
+ ld r3, PACALPPACAPTR(r13) /* is the host using the PMU? */
+ lbz r4, LPPACA_PMCINUSE(r3)
+ cmpwi r4, 0
+ beq 23f /* skip if not */
+ lwz r3, HSTATE_PMC(r13)
+ lwz r4, HSTATE_PMC + 4(r13)
+ lwz r5, HSTATE_PMC + 8(r13)
+ lwz r6, HSTATE_PMC + 12(r13)
+ lwz r8, HSTATE_PMC + 16(r13)
+ lwz r9, HSTATE_PMC + 20(r13)
+BEGIN_FTR_SECTION
+ lwz r10, HSTATE_PMC + 24(r13)
+ lwz r11, HSTATE_PMC + 28(r13)
+END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
+ mtspr SPRN_PMC1, r3
+ mtspr SPRN_PMC2, r4
+ mtspr SPRN_PMC3, r5
+ mtspr SPRN_PMC4, r6
+ mtspr SPRN_PMC5, r8
+ mtspr SPRN_PMC6, r9
+BEGIN_FTR_SECTION
+ mtspr SPRN_PMC7, r10
+ mtspr SPRN_PMC8, r11
+END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
+ ld r3, HSTATE_MMCR(r13)
+ ld r4, HSTATE_MMCR + 8(r13)
+ ld r5, HSTATE_MMCR + 16(r13)
+ mtspr SPRN_MMCR1, r4
+ mtspr SPRN_MMCRA, r5
+ mtspr SPRN_MMCR0, r3
+ isync
+23:
+
+ /*
+ * For external and machine check interrupts, we need
+ * to call the Linux handler to process the interrupt.
+ * We do that by jumping to absolute address 0x500 for
+ * external interrupts, or the machine_check_fwnmi label
+ * for machine checks (since firmware might have patched
+ * the vector area at 0x200). The [h]rfid at the end of the
+ * handler will return to the book3s_hv_interrupts.S code.
+ * For other interrupts we do the rfid to get back
+ * to the book3s_hv_interrupts.S code here.
+ */
+ ld r8, 112+PPC_LR_STKOFF(r1)
+ addi r1, r1, 112
+ ld r7, HSTATE_HOST_MSR(r13)
+
+ cmpwi cr1, r12, BOOK3S_INTERRUPT_MACHINE_CHECK
+ cmpwi r12, BOOK3S_INTERRUPT_EXTERNAL
+BEGIN_FTR_SECTION
+ beq 11f
+END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206)
+
+ /* RFI into the highmem handler, or branch to interrupt handler */
+ mfmsr r6
+ li r0, MSR_RI
+ andc r6, r6, r0
+ mtmsrd r6, 1 /* Clear RI in MSR */
+ mtsrr0 r8
+ mtsrr1 r7
+ beqa 0x500 /* external interrupt (PPC970) */
+ beq cr1, 13f /* machine check */
+ RFI
+
+ /* On POWER7, we have external interrupts set to use HSRR0/1 */
+11: mtspr SPRN_HSRR0, r8
+ mtspr SPRN_HSRR1, r7
+ ba 0x500
+
+13: b machine_check_fwnmi
+
/*
* We come in here when wakened from nap mode on a secondary hw thread.
@@ -133,7 +228,7 @@ kvm_start_guest:
cmpdi r4,0
/* if we have no vcpu to run, go back to sleep */
beq kvm_no_guest
- b kvmppc_hv_entry
+ b 30f
27: /* XXX should handle hypervisor maintenance interrupts etc. here */
b kvm_no_guest
@@ -143,6 +238,57 @@ kvm_start_guest:
stw r8,HSTATE_SAVED_XIRR(r13)
b kvm_no_guest
+30: bl kvmppc_hv_entry
+
+ /* Back from the guest, go back to nap */
+ /* Clear our vcpu pointer so we don't come back in early */
+ li r0, 0
+ std r0, HSTATE_KVM_VCPU(r13)
+ lwsync
+ /* Clear any pending IPI - we're an offline thread */
+ ld r5, HSTATE_XICS_PHYS(r13)
+ li r7, XICS_XIRR
+ lwzcix r3, r5, r7 /* ack any pending interrupt */
+ rlwinm. r0, r3, 0, 0xffffff /* any pending? */
+ beq 37f
+ sync
+ li r0, 0xff
+ li r6, XICS_MFRR
+ stbcix r0, r5, r6 /* clear the IPI */
+ stwcix r3, r5, r7 /* EOI it */
+37: sync
+
+ /* increment the nap count and then go to nap mode */
+ ld r4, HSTATE_KVM_VCORE(r13)
+ addi r4, r4, VCORE_NAP_COUNT
+ lwsync /* make previous updates visible */
+51: lwarx r3, 0, r4
+ addi r3, r3, 1
+ stwcx. r3, 0, r4
+ bne 51b
+
+kvm_no_guest:
+ li r0, KVM_HWTHREAD_IN_NAP
+ stb r0, HSTATE_HWTHREAD_STATE(r13)
+ li r3, LPCR_PECE0
+ mfspr r4, SPRN_LPCR
+ rlwimi r4, r3, 0, LPCR_PECE0 | LPCR_PECE1
+ mtspr SPRN_LPCR, r4
+ isync
+ std r0, HSTATE_SCRATCH0(r13)
+ ptesync
+ ld r0, HSTATE_SCRATCH0(r13)
+1: cmpd r0, r0
+ bne 1b
+ nap
+ b .
+
+/******************************************************************************
+ * *
+ * Entry code *
+ * *
+ *****************************************************************************/
+
.global kvmppc_hv_entry
kvmppc_hv_entry:
@@ -155,7 +301,8 @@ kvmppc_hv_entry:
* all other volatile GPRS = free
*/
mflr r0
- std r0, HSTATE_VMHANDLER(r13)
+ std r0, PPC_LR_STKOFF(r1)
+ stdu r1, -112(r1)
/* Set partition DABR */
/* Do this before re-enabling PMU to avoid P7 DABR corruption bug */
@@ -1203,103 +1350,30 @@ BEGIN_FTR_SECTION
stw r11, VCPU_PMC + 28(r9)
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
22:
+ ld r0, 112+PPC_LR_STKOFF(r1)
+ addi r1, r1, 112
+ mtlr r0
+ blr
+secondary_too_late:
+ ld r5,HSTATE_KVM_VCORE(r13)
+ HMT_LOW
+13: lbz r3,VCORE_IN_GUEST(r5)
+ cmpwi r3,0
+ bne 13b
+ HMT_MEDIUM
+ li r0, KVM_GUEST_MODE_NONE
+ stb r0, HSTATE_IN_GUEST(r13)
+ ld r11,PACA_SLBSHADOWPTR(r13)
- /* Secondary threads go off to take a nap on POWER7 */
-BEGIN_FTR_SECTION
- lwz r0,VCPU_PTID(r9)
- cmpwi r0,0
- bne secondary_nap
-END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206)
-
- /* Restore host DABR and DABRX */
- ld r5,HSTATE_DABR(r13)
- li r6,7
- mtspr SPRN_DABR,r5
- mtspr SPRN_DABRX,r6
-
- /* Restore SPRG3 */
- ld r3,PACA_SPRG3(r13)
- mtspr SPRN_SPRG3,r3
-
- /*
- * Reload DEC. HDEC interrupts were disabled when
- * we reloaded the host's LPCR value.
- */
- ld r3, HSTATE_DECEXP(r13)
- mftb r4
- subf r4, r4, r3
- mtspr SPRN_DEC, r4
-
- /* Reload the host's PMU registers */
- ld r3, PACALPPACAPTR(r13) /* is the host using the PMU? */
- lbz r4, LPPACA_PMCINUSE(r3)
- cmpwi r4, 0
- beq 23f /* skip if not */
- lwz r3, HSTATE_PMC(r13)
- lwz r4, HSTATE_PMC + 4(r13)
- lwz r5, HSTATE_PMC + 8(r13)
- lwz r6, HSTATE_PMC + 12(r13)
- lwz r8, HSTATE_PMC + 16(r13)
- lwz r9, HSTATE_PMC + 20(r13)
-BEGIN_FTR_SECTION
- lwz r10, HSTATE_PMC + 24(r13)
- lwz r11, HSTATE_PMC + 28(r13)
-END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
- mtspr SPRN_PMC1, r3
- mtspr SPRN_PMC2, r4
- mtspr SPRN_PMC3, r5
- mtspr SPRN_PMC4, r6
- mtspr SPRN_PMC5, r8
- mtspr SPRN_PMC6, r9
-BEGIN_FTR_SECTION
- mtspr SPRN_PMC7, r10
- mtspr SPRN_PMC8, r11
-END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
- ld r3, HSTATE_MMCR(r13)
- ld r4, HSTATE_MMCR + 8(r13)
- ld r5, HSTATE_MMCR + 16(r13)
- mtspr SPRN_MMCR1, r4
- mtspr SPRN_MMCRA, r5
- mtspr SPRN_MMCR0, r3
- isync
-23:
- /*
- * For external and machine check interrupts, we need
- * to call the Linux handler to process the interrupt.
- * We do that by jumping to absolute address 0x500 for
- * external interrupts, or the machine_check_fwnmi label
- * for machine checks (since firmware might have patched
- * the vector area at 0x200). The [h]rfid at the end of the
- * handler will return to the book3s_hv_interrupts.S code.
- * For other interrupts we do the rfid to get back
- * to the book3s_hv_interrupts.S code here.
- */
- ld r8, HSTATE_VMHANDLER(r13)
- ld r7, HSTATE_HOST_MSR(r13)
-
- cmpwi cr1, r12, BOOK3S_INTERRUPT_MACHINE_CHECK
- cmpwi r12, BOOK3S_INTERRUPT_EXTERNAL
-BEGIN_FTR_SECTION
- beq 11f
-END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206)
-
- /* RFI into the highmem handler, or branch to interrupt handler */
- mfmsr r6
- li r0, MSR_RI
- andc r6, r6, r0
- mtmsrd r6, 1 /* Clear RI in MSR */
- mtsrr0 r8
- mtsrr1 r7
- beqa 0x500 /* external interrupt (PPC970) */
- beq cr1, 13f /* machine check */
- RFI
-
- /* On POWER7, we have external interrupts set to use HSRR0/1 */
-11: mtspr SPRN_HSRR0, r8
- mtspr SPRN_HSRR1, r7
- ba 0x500
-
-13: b machine_check_fwnmi
+ .rept SLB_NUM_BOLTED
+ ld r5,SLBSHADOW_SAVEAREA(r11)
+ ld r6,SLBSHADOW_SAVEAREA+8(r11)
+ andis. r7,r5,SLB_ESID_V@h
+ beq 1f
+ slbmte r6,r5
+1: addi r11,r11,16
+ .endr
+ b 22b
/*
* Check whether an HDSI is an HPTE not found fault or something else.
@@ -1746,68 +1820,6 @@ machine_check_realmode:
rotldi r11, r11, 63
b fast_interrupt_c_return
-secondary_too_late:
- ld r5,HSTATE_KVM_VCORE(r13)
- HMT_LOW
-13: lbz r3,VCORE_IN_GUEST(r5)
- cmpwi r3,0
- bne 13b
- HMT_MEDIUM
- ld r11,PACA_SLBSHADOWPTR(r13)
-
- .rept SLB_NUM_BOLTED
- ld r5,SLBSHADOW_SAVEAREA(r11)
- ld r6,SLBSHADOW_SAVEAREA+8(r11)
- andis. r7,r5,SLB_ESID_V@h
- beq 1f
- slbmte r6,r5
-1: addi r11,r11,16
- .endr
-
-secondary_nap:
- /* Clear our vcpu pointer so we don't come back in early */
- li r0, 0
- std r0, HSTATE_KVM_VCPU(r13)
- lwsync
- /* Clear any pending IPI - assume we're a secondary thread */
- ld r5, HSTATE_XICS_PHYS(r13)
- li r7, XICS_XIRR
- lwzcix r3, r5, r7 /* ack any pending interrupt */
- rlwinm. r0, r3, 0, 0xffffff /* any pending? */
- beq 37f
- sync
- li r0, 0xff
- li r6, XICS_MFRR
- stbcix r0, r5, r6 /* clear the IPI */
- stwcix r3, r5, r7 /* EOI it */
-37: sync
-
- /* increment the nap count and then go to nap mode */
- ld r4, HSTATE_KVM_VCORE(r13)
- addi r4, r4, VCORE_NAP_COUNT
- lwsync /* make previous updates visible */
-51: lwarx r3, 0, r4
- addi r3, r3, 1
- stwcx. r3, 0, r4
- bne 51b
-
-kvm_no_guest:
- li r0, KVM_HWTHREAD_IN_NAP
- stb r0, HSTATE_HWTHREAD_STATE(r13)
-
- li r3, LPCR_PECE0
- mfspr r4, SPRN_LPCR
- rlwimi r4, r3, 0, LPCR_PECE0 | LPCR_PECE1
- mtspr SPRN_LPCR, r4
- isync
- std r0, HSTATE_SCRATCH0(r13)
- ptesync
- ld r0, HSTATE_SCRATCH0(r13)
-1: cmpd r0, r0
- bne 1b
- nap
- b .
-
/*
* Save away FP, VMX and VSX registers.
* r3 = vcpu pointer
--
1.8.4.rc3
* [PATCH 09/11] KVM: PPC: Book3S HV: Pull out interrupt-reading code into a subroutine
2013-09-06 3:10 [PATCH 00/11] HV KVM improvements in preparation for POWER8 support Paul Mackerras
` (7 preceding siblings ...)
2013-09-06 3:23 ` [PATCH 08/11] KVM: PPC: Book3S HV: Restructure kvmppc_hv_entry to be a subroutine Paul Mackerras
@ 2013-09-06 3:24 ` Paul Mackerras
2013-09-13 21:51 ` Alexander Graf
2013-09-06 3:24 ` [PATCH 10/11] KVM: PPC: Book3S HV: Avoid unbalanced increments of VPA yield count Paul Mackerras
` (2 subsequent siblings)
11 siblings, 1 reply; 34+ messages in thread
From: Paul Mackerras @ 2013-09-06 3:24 UTC (permalink / raw)
To: Alexander Graf, kvm-ppc, kvm
This moves the code in book3s_hv_rmhandlers.S that reads any pending
interrupt from the XICS interrupt controller, and works out whether
it is an IPI for the guest, an IPI for the host, or a device interrupt,
into a new function called kvmppc_read_intr. Later patches will
need this.
Signed-off-by: Paul Mackerras <paulus@samba.org>
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 117 +++++++++++++++++++-------------
1 file changed, 68 insertions(+), 49 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index d9ab139..01515b6 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -868,46 +868,11 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_206)
* set, we know the host wants us out so let's do it now
*/
do_ext_interrupt:
- lbz r0, HSTATE_HOST_IPI(r13)
- cmpwi r0, 0
- bne ext_interrupt_to_host
-
- /* Now read the interrupt from the ICP */
- ld r5, HSTATE_XICS_PHYS(r13)
- li r7, XICS_XIRR
- cmpdi r5, 0
- beq- ext_interrupt_to_host
- lwzcix r3, r5, r7
- rlwinm. r0, r3, 0, 0xffffff
- sync
- beq 3f /* if nothing pending in the ICP */
-
- /* We found something in the ICP...
- *
- * If it's not an IPI, stash it in the PACA and return to
- * the host, we don't (yet) handle directing real external
- * interrupts directly to the guest
- */
- cmpwi r0, XICS_IPI
- bne ext_stash_for_host
-
- /* It's an IPI, clear the MFRR and EOI it */
- li r0, 0xff
- li r6, XICS_MFRR
- stbcix r0, r5, r6 /* clear the IPI */
- stwcix r3, r5, r7 /* EOI it */
- sync
-
- /* We need to re-check host IPI now in case it got set in the
- * meantime. If it's clear, we bounce the interrupt to the
- * guest
- */
- lbz r0, HSTATE_HOST_IPI(r13)
- cmpwi r0, 0
- bne- 1f
+ bl kvmppc_read_intr
+ cmpdi r3, 0
+ bgt ext_interrupt_to_host
/* Allright, looks like an IPI for the guest, we need to set MER */
-3:
/* Check if any CPU is heading out to the host, if so head out too */
ld r5, HSTATE_KVM_VCORE(r13)
lwz r0, VCORE_ENTRY_EXIT(r5)
@@ -936,17 +901,6 @@ do_ext_interrupt:
mtspr SPRN_LPCR, r8
b fast_guest_return
- /* We raced with the host, we need to resend that IPI, bummer */
-1: li r0, IPI_PRIORITY
- stbcix r0, r5, r6 /* set the IPI */
- sync
- b ext_interrupt_to_host
-
-ext_stash_for_host:
- /* It's not an IPI and it's for the host, stash it in the PACA
- * before exit, it will be picked up by the host ICP driver
- */
- stw r3, HSTATE_SAVED_XIRR(r13)
ext_interrupt_to_host:
guest_exit_cont: /* r9 = vcpu, r12 = trap, r13 = paca */
@@ -1821,6 +1775,71 @@ machine_check_realmode:
b fast_interrupt_c_return
/*
+ * Determine what sort of external interrupt is pending (if any).
+ * Returns:
+ * 0 if no interrupt is pending
+ * 1 if an interrupt is pending that needs to be handled by the host
+ * -1 if there was a guest wakeup IPI (which has now been cleared)
+ */
+kvmppc_read_intr:
+ /* see if a host IPI is pending */
+ li r3, 1
+ lbz r0, HSTATE_HOST_IPI(r13)
+ cmpwi r0, 0
+ bne 1f
+
+ /* Now read the interrupt from the ICP */
+ ld r6, HSTATE_XICS_PHYS(r13)
+ li r7, XICS_XIRR
+ cmpdi r6, 0
+ beq- 1f
+ lwzcix r0, r6, r7
+ rlwinm. r3, r0, 0, 0xffffff
+ sync
+ beq 1f /* if nothing pending in the ICP */
+
+ /* We found something in the ICP...
+ *
+ * If it's not an IPI, stash it in the PACA and return to
+ * the host, we don't (yet) handle directing real external
+ * interrupts directly to the guest
+ */
+ cmpwi r3, XICS_IPI /* if there is, is it an IPI? */
+ li r3, 1
+ bne 42f
+
+ /* It's an IPI, clear the MFRR and EOI it */
+ li r3, 0xff
+ li r8, XICS_MFRR
+ stbcix r3, r6, r8 /* clear the IPI */
+ stwcix r0, r6, r7 /* EOI it */
+ sync
+
+ /* We need to re-check host IPI now in case it got set in the
+ * meantime. If it's clear, we bounce the interrupt to the
+ * guest
+ */
+ lbz r0, HSTATE_HOST_IPI(r13)
+ cmpwi r0, 0
+ bne- 43f
+
+ /* OK, it's an IPI for us */
+ li r3, -1
+1: blr
+
+42: /* It's not an IPI and it's for the host, stash it in the PACA
+ * before exit, it will be picked up by the host ICP driver
+ */
+ stw r0, HSTATE_SAVED_XIRR(r13)
+ b 1b
+
+43: /* We raced with the host, we need to resend that IPI, bummer */
+ li r0, IPI_PRIORITY
+ stbcix r0, r6, r8 /* set the IPI */
+ sync
+ b 1b
+
+/*
* Save away FP, VMX and VSX registers.
* r3 = vcpu pointer
*/
--
1.8.4.rc3
* [PATCH 10/11] KVM: PPC: Book3S HV: Avoid unbalanced increments of VPA yield count
2013-09-06 3:10 [PATCH 00/11] HV KVM improvements in preparation for POWER8 support Paul Mackerras
` (8 preceding siblings ...)
2013-09-06 3:24 ` [PATCH 09/11] KVM: PPC: Book3S HV: Pull out interrupt-reading code into " Paul Mackerras
@ 2013-09-06 3:24 ` Paul Mackerras
2013-09-13 21:51 ` Alexander Graf
2013-09-06 3:25 ` [PATCH 11/11] KVM: PPC: Book3S HV: Return -EINVAL rather than BUG'ing Paul Mackerras
2013-09-11 9:11 ` [PATCH 00/11] HV KVM improvements in preparation for POWER8 support Paul Mackerras
11 siblings, 1 reply; 34+ messages in thread
From: Paul Mackerras @ 2013-09-06 3:24 UTC (permalink / raw)
To: Alexander Graf, kvm-ppc, kvm
The yield count in the VPA is supposed to be incremented every time
we enter the guest, and every time we exit the guest, so that its
value is even when the vcpu is running in the guest and odd when it
isn't. However, it's currently possible that we increment the yield
count on the way into the guest but then find that other CPU threads
are already exiting the guest, so we go back to nap mode via the
secondary_too_late label. In this situation we don't increment the
yield count again, breaking the relationship between the LSB of the
count and whether the vcpu is in the guest.
To fix this, we move the increment of the yield count to a point
after we have checked whether other CPU threads are exiting.
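As a quick illustration of the invariant being preserved (this snippet is not part of the patch), a consumer of the VPA can tell whether the vcpu is in the guest from the low bit of the count, which is exactly what the unbalanced increment was breaking:

    /* Sketch only: entry and exit each bump the count, so an even value
     * means the vcpu is currently executing in the guest. */
    static inline int vpa_vcpu_in_guest(unsigned int yield_count)
    {
    	return (yield_count & 1) == 0;
    }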
Signed-off-by: Paul Mackerras <paulus@samba.org>
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 01515b6..31030f3 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -401,16 +401,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206)
/* Save R1 in the PACA */
std r1, HSTATE_HOST_R1(r13)
- /* Increment yield count if they have a VPA */
- ld r3, VCPU_VPA(r4)
- cmpdi r3, 0
- beq 25f
- lwz r5, LPPACA_YIELDCOUNT(r3)
- addi r5, r5, 1
- stw r5, LPPACA_YIELDCOUNT(r3)
- li r6, 1
- stb r6, VCPU_VPA_DIRTY(r4)
-25:
/* Load up DAR and DSISR */
ld r5, VCPU_DAR(r4)
lwz r6, VCPU_DSISR(r4)
@@ -525,6 +515,16 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
mtspr SPRN_RMOR,r8
isync
+ /* Increment yield count if they have a VPA */
+ ld r3, VCPU_VPA(r4)
+ cmpdi r3, 0
+ beq 25f
+ lwz r5, LPPACA_YIELDCOUNT(r3)
+ addi r5, r5, 1
+ stw r5, LPPACA_YIELDCOUNT(r3)
+ li r6, 1
+ stb r6, VCPU_VPA_DIRTY(r4)
+25:
/* Check if HDEC expires soon */
mfspr r3,SPRN_HDEC
cmpwi r3,10
--
1.8.4.rc3
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH 11/11] KVM: PPC: Book3S HV: Return -EINVAL rather than BUG'ing
2013-09-06 3:10 [PATCH 00/11] HV KVM improvements in preparation for POWER8 support Paul Mackerras
` (9 preceding siblings ...)
2013-09-06 3:24 ` [PATCH 10/11] KVM: PPC: Book3S HV: Avoid unbalanced increments of VPA yield count Paul Mackerras
@ 2013-09-06 3:25 ` Paul Mackerras
2013-09-13 21:51 ` Alexander Graf
2013-09-11 9:11 ` [PATCH 00/11] HV KVM improvements in preparation for POWER8 support Paul Mackerras
11 siblings, 1 reply; 34+ messages in thread
From: Paul Mackerras @ 2013-09-06 3:25 UTC (permalink / raw)
To: Alexander Graf, kvm-ppc, kvm
From: Michael Ellerman <michael@ellerman.id.au>
This means that if we do happen to get a trap that we don't know
about, we abort the guest rather than crashing the host kernel.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
---
arch/powerpc/kvm/book3s_hv.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 0bb23a9..731e46e 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -709,8 +709,7 @@ static int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
printk(KERN_EMERG "trap=0x%x | pc=0x%lx | msr=0x%llx\n",
vcpu->arch.trap, kvmppc_get_pc(vcpu),
vcpu->arch.shregs.msr);
- r = RESUME_HOST;
- BUG();
+ r = -EINVAL;
break;
}
--
1.8.4.rc3
^ permalink raw reply related [flat|nested] 34+ messages in thread
* Re: [PATCH 06/11] KVM: PPC: Book3S HV: Support POWER6 compatibility mode on POWER7
2013-09-06 3:22 ` [PATCH 06/11] KVM: PPC: Book3S HV: Support POWER6 compatibility mode on POWER7 Paul Mackerras
@ 2013-09-06 5:28 ` Aneesh Kumar K.V
2013-09-06 6:38 ` Paul Mackerras
2013-09-13 19:58 ` Alexander Graf
1 sibling, 1 reply; 34+ messages in thread
From: Aneesh Kumar K.V @ 2013-09-06 5:28 UTC (permalink / raw)
To: Paul Mackerras, Alexander Graf, kvm-ppc, kvm
Paul Mackerras <paulus@samba.org> writes:
> This enables us to use the Processor Compatibility Register (PCR) on
> POWER7 to put the processor into architecture 2.05 compatibility mode
> when running a guest. In this mode the new instructions and registers
> that were introduced on POWER7 are disabled in user mode. This
> includes all the VSX facilities plus several other instructions such
> as ldbrx, stdbrx, popcntw, popcntd, etc.
>
> To select this mode, we have a new register accessible through the
> set/get_one_reg interface, called KVM_REG_PPC_ARCH_COMPAT. Setting
> this to zero gives the full set of capabilities of the processor.
> Setting it to one of the "logical" PVR values defined in PAPR puts
> the vcpu into the compatibility mode for the corresponding
> architecture level. The supported values are:
>
> 0x0f000002 Architecture 2.05 (POWER6)
> 0x0f000003 Architecture 2.06 (POWER7)
> 0x0f100003 Architecture 2.06+ (POWER7+)
>
> Since the PCR is per-core, the architecture compatibility level and
> the corresponding PCR value are stored in the struct kvmppc_vcore, and
> are therefore shared between all vcpus in a virtual core.
We already have KVM_SET_SREGS taking pvr as an argument. Can't we do
this via kvmppc_set_pvr? Can you also share the qemu changes? There I
guess we need to update the "cpu-version" in the device tree so
that /proc/cpuinfo shows the right information in the guest.
-aneesh
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 06/11] KVM: PPC: Book3S HV: Support POWER6 compatibility mode on POWER7
2013-09-06 5:28 ` Aneesh Kumar K.V
@ 2013-09-06 6:38 ` Paul Mackerras
0 siblings, 0 replies; 34+ messages in thread
From: Paul Mackerras @ 2013-09-06 6:38 UTC (permalink / raw)
To: Aneesh Kumar K.V; +Cc: Alexander Graf, kvm-ppc, kvm
On Fri, Sep 06, 2013 at 10:58:16AM +0530, Aneesh Kumar K.V wrote:
> Paul Mackerras <paulus@samba.org> writes:
>
> > This enables us to use the Processor Compatibility Register (PCR) on
> > POWER7 to put the processor into architecture 2.05 compatibility mode
> > when running a guest. In this mode the new instructions and registers
> > that were introduced on POWER7 are disabled in user mode. This
> > includes all the VSX facilities plus several other instructions such
> > as ldbrx, stdbrx, popcntw, popcntd, etc.
> >
> > To select this mode, we have a new register accessible through the
> > set/get_one_reg interface, called KVM_REG_PPC_ARCH_COMPAT. Setting
> > this to zero gives the full set of capabilities of the processor.
> > Setting it to one of the "logical" PVR values defined in PAPR puts
> > the vcpu into the compatibility mode for the corresponding
> > architecture level. The supported values are:
> >
> > 0x0f000002 Architecture 2.05 (POWER6)
> > 0x0f000003 Architecture 2.06 (POWER7)
> > 0x0f100003 Architecture 2.06+ (POWER7+)
> >
> > Since the PCR is per-core, the architecture compatibility level and
> > the corresponding PCR value are stored in the struct kvmppc_vcore, and
> > are therefore shared between all vcpus in a virtual core.
>
> We already have KVM_SET_SREGS taking pvr as an argument. Can't we do
> this via kvmppc_set_pvr? Can you also share the qemu changes? There I
> guess we need to update the "cpu-version" in the device tree so
> that /proc/cpuinfo shows the right information in the guest.
The discussion on the qemu mailing list pointed out that we aren't
really changing the PVR; the guest still sees the real PVR, and what
we're doing is setting a mode of the CPU rather than changing it into
an older CPU. So, it seemed better to use something separate from the
PVR. Also, if we used the pvr setting to convey this information,
then apparently under TCG the guest would see the logical PVR value if
it read the (apparently) real PVR.
Alexey is working on the relevant QEMU patches.
Paul.
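For the QEMU side that Alexey is preparing, the userspace usage would presumably look something like the sketch below, assuming a kernel header that carries the new KVM_REG_PPC_ARCH_COMPAT define; error handling is omitted and the function name is made up for illustration.

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Put a vcpu into architecture 2.05 (POWER6) compatibility mode;
     * 0x0f000002 is the PAPR logical PVR quoted in the patch description. */
    static int set_arch_205_compat(int vcpu_fd)
    {
    	uint32_t compat = 0x0f000002;
    	struct kvm_one_reg reg = {
    		.id   = KVM_REG_PPC_ARCH_COMPAT,
    		.addr = (uintptr_t)&compat,
    	};

    	return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
    }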
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 00/11] HV KVM improvements in preparation for POWER8 support
2013-09-06 3:10 [PATCH 00/11] HV KVM improvements in preparation for POWER8 support Paul Mackerras
` (10 preceding siblings ...)
2013-09-06 3:25 ` [PATCH 11/11] KVM: PPC: Book3S HV: Return -EINVAL rather than BUG'ing Paul Mackerras
@ 2013-09-11 9:11 ` Paul Mackerras
11 siblings, 0 replies; 34+ messages in thread
From: Paul Mackerras @ 2013-09-11 9:11 UTC (permalink / raw)
To: Alexander Graf, kvm-ppc, kvm
On Fri, Sep 06, 2013 at 01:10:03PM +1000, Paul Mackerras wrote:
> This series of patches is based on Alex Graf's kvm-ppc-queue branch.
> It fixes some bugs, makes some more registers accessible through the
> one_reg interface, and implements some missing features such as
> support for the compatibility modes in recent POWER cpus and support
> for the guest having a different timebase origin from the host.
> These patches are all useful on POWER7 and will be needed for good
> POWER8 support.
>
> Please apply.
Alex,
Any comment on these patches? Some of them define new one_reg
register identifiers, and I'm keen to get those nailed down so that we
can start submitting the corresponding QEMU patches.
Thanks,
Paul.
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 04/11] KVM: PPC: Book3S HV: Add GET/SET_ONE_REG interface for LPCR
2013-09-06 3:21 ` [PATCH 04/11] KVM: PPC: Book3S HV: Add GET/SET_ONE_REG interface for LPCR Paul Mackerras
@ 2013-09-13 18:36 ` Alexander Graf
2013-09-14 2:21 ` Paul Mackerras
0 siblings, 1 reply; 34+ messages in thread
From: Alexander Graf @ 2013-09-13 18:36 UTC (permalink / raw)
To: Paul Mackerras; +Cc: kvm-ppc, kvm
On 05.09.2013, at 22:21, Paul Mackerras wrote:
> This adds the ability for userspace to read and write the LPCR
> (Logical Partitioning Control Register) value relating to a guest
> via the GET/SET_ONE_REG interface. There is only one LPCR value
> for the guest, which can be accessed through any vcpu. Userspace
> can only modify the following fields of the LPCR value:
>
> DPFD Default prefetch depth
> ILE Interrupt little-endian
> TC Translation control (secondary HPT hash group search disable)
>
> Signed-off-by: Paul Mackerras <paulus@samba.org>
There are 3 things I dislike about this patch :)
1) A vcpu one_reg should only change the state of the vcpu it's targeting. You want a vm wide thing.
2) If anyone gets crazy enough to implement HV emulation in PR KVM this would overlap with the guest's guest LPCR, so we need to name it differently. It's really VM configuration, not a register. Maybe ENABLE_CAP is a better fit?
3) Checkpatch fails:
WARNING: please, no space before tabs
#59: FILE: arch/powerpc/include/asm/reg.h:295:
+#define LPCR_TC ^I0x00000200^I/* Translation control */$
Alex
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 06/11] KVM: PPC: Book3S HV: Support POWER6 compatibility mode on POWER7
2013-09-06 3:22 ` [PATCH 06/11] KVM: PPC: Book3S HV: Support POWER6 compatibility mode on POWER7 Paul Mackerras
2013-09-06 5:28 ` Aneesh Kumar K.V
@ 2013-09-13 19:58 ` Alexander Graf
2013-09-14 2:03 ` Paul Mackerras
1 sibling, 1 reply; 34+ messages in thread
From: Alexander Graf @ 2013-09-13 19:58 UTC (permalink / raw)
To: Paul Mackerras; +Cc: kvm-ppc, kvm
On 05.09.2013, at 22:22, Paul Mackerras wrote:
> This enables us to use the Processor Compatibility Register (PCR) on
> POWER7 to put the processor into architecture 2.05 compatibility mode
> when running a guest. In this mode the new instructions and registers
> that were introduced on POWER7 are disabled in user mode. This
> includes all the VSX facilities plus several other instructions such
> as ldbrx, stdbrx, popcntw, popcntd, etc.
>
> To select this mode, we have a new register accessible through the
> set/get_one_reg interface, called KVM_REG_PPC_ARCH_COMPAT. Setting
> this to zero gives the full set of capabilities of the processor.
> Setting it to one of the "logical" PVR values defined in PAPR puts
> the vcpu into the compatibility mode for the corresponding
> architecture level. The supported values are:
>
> 0x0f000002 Architecture 2.05 (POWER6)
> 0x0f000003 Architecture 2.06 (POWER7)
> 0x0f100003 Architecture 2.06+ (POWER7+)
>
> Since the PCR is per-core, the architecture compatibility level and
> the corresponding PCR value are stored in the struct kvmppc_vcore, and
> are therefore shared between all vcpus in a virtual core.
>
> Signed-off-by: Paul Mackerras <paulus@samba.org>
> ---
> Documentation/virtual/kvm/api.txt | 1 +
> arch/powerpc/include/asm/kvm_host.h | 2 ++
> arch/powerpc/include/asm/reg.h | 11 +++++++++++
> arch/powerpc/include/uapi/asm/kvm.h | 3 +++
> arch/powerpc/kernel/asm-offsets.c | 1 +
> arch/powerpc/kvm/book3s_hv.c | 35 +++++++++++++++++++++++++++++++++
> arch/powerpc/kvm/book3s_hv_rmhandlers.S | 11 +++++++++--
> 7 files changed, 62 insertions(+), 2 deletions(-)
>
> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> index 34a32b6..f1f300f 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -1837,6 +1837,7 @@ registers, find a list below:
> PPC | KVM_REG_PPC_VRSAVE | 32
> PPC | KVM_REG_PPC_LPCR | 64
> PPC | KVM_REG_PPC_PPR | 64
> + PPC | KVM_REG_PPC_ARCH_COMPAT | 32
> PPC | KVM_REG_PPC_TM_GPR0 | 64
> ...
> PPC | KVM_REG_PPC_TM_GPR31 | 64
> diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
> index b0dcd18..5a40270 100644
> --- a/arch/powerpc/include/asm/kvm_host.h
> +++ b/arch/powerpc/include/asm/kvm_host.h
> @@ -295,6 +295,8 @@ struct kvmppc_vcore {
> u64 preempt_tb;
> struct kvm_vcpu *runner;
> u64 tb_offset; /* guest timebase - host timebase */
> + u32 arch_compat;
> + ulong pcr;
> };
>
> #define VCORE_ENTRY_COUNT(vc) ((vc)->entry_exit_count & 0xff)
> diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
> index 3fc0d06..52ff962 100644
> --- a/arch/powerpc/include/asm/reg.h
> +++ b/arch/powerpc/include/asm/reg.h
> @@ -305,6 +305,10 @@
> #define LPID_RSVD 0x3ff /* Reserved LPID for partn switching */
> #define SPRN_HMER 0x150 /* Hardware m? error recovery */
> #define SPRN_HMEER 0x151 /* Hardware m? enable error recovery */
> +#define SPRN_PCR 0x152 /* Processor compatibility register */
> +#define PCR_VEC_DIS (1ul << (63-0)) /* Vec. disable (pre POWER8) */
> +#define PCR_VSX_DIS (1ul << (63-1)) /* VSX disable (pre POWER8) */
> +#define PCR_ARCH_205 0x2 /* Architecture 2.05 */
> #define SPRN_HEIR 0x153 /* Hypervisor Emulated Instruction Register */
> #define SPRN_TLBINDEXR 0x154 /* P7 TLB control register */
> #define SPRN_TLBVPNR 0x155 /* P7 TLB control register */
> @@ -1095,6 +1099,13 @@
> #define PVR_BE 0x0070
> #define PVR_PA6T 0x0090
>
> +/* "Logical" PVR values defined in PAPR, representing architecture levels */
> +#define PVR_ARCH_204 0x0f000001
> +#define PVR_ARCH_205 0x0f000002
> +#define PVR_ARCH_206 0x0f000003
> +#define PVR_ARCH_206p 0x0f100003
> +#define PVR_ARCH_207 0x0f000004
> +
> /* Macros for setting and retrieving special purpose registers */
> #ifndef __ASSEMBLY__
> #define mfmsr() ({unsigned long rval; \
> diff --git a/arch/powerpc/include/uapi/asm/kvm.h b/arch/powerpc/include/uapi/asm/kvm.h
> index fab6bc1..e420d46 100644
> --- a/arch/powerpc/include/uapi/asm/kvm.h
> +++ b/arch/powerpc/include/uapi/asm/kvm.h
> @@ -536,6 +536,9 @@ struct kvm_get_htab_header {
> #define KVM_REG_PPC_LPCR (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0xb5)
> #define KVM_REG_PPC_PPR (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xb6)
>
> +/* Architecture compatibility level */
> +#define KVM_REG_PPC_ARCH_COMPAT (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0xb7)
> +
> /* Transactional Memory checkpointed state:
> * This is all GPRs, all VSX regs and a subset of SPRs
> */
> diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
> index 5c6ea96..115dd64 100644
> --- a/arch/powerpc/kernel/asm-offsets.c
> +++ b/arch/powerpc/kernel/asm-offsets.c
> @@ -522,6 +522,7 @@ int main(void)
> DEFINE(VCORE_IN_GUEST, offsetof(struct kvmppc_vcore, in_guest));
> DEFINE(VCORE_NAPPING_THREADS, offsetof(struct kvmppc_vcore, napping_threads));
> DEFINE(VCORE_TB_OFFSET, offsetof(struct kvmppc_vcore, tb_offset));
> + DEFINE(VCORE_PCR, offsetof(struct kvmppc_vcore, pcr));
> DEFINE(VCPU_SVCPU, offsetof(struct kvmppc_vcpu_book3s, shadow_vcpu) -
> offsetof(struct kvmppc_vcpu_book3s, vcpu));
> DEFINE(VCPU_SLB_E, offsetof(struct kvmppc_slb, orige));
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index eceff7e..1a10afa 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -166,6 +166,35 @@ void kvmppc_set_pvr(struct kvm_vcpu *vcpu, u32 pvr)
> vcpu->arch.pvr = pvr;
> }
>
> +int kvmppc_set_arch_compat(struct kvm_vcpu *vcpu, u32 arch_compat)
> +{
> + unsigned long pcr = 0;
> + struct kvmppc_vcore *vc = vcpu->arch.vcore;
> +
> + if (arch_compat) {
> + if (!cpu_has_feature(CPU_FTR_ARCH_206))
> + return -EINVAL; /* 970 has no compat mode support */
> +
> + switch (arch_compat) {
> + case PVR_ARCH_205:
> + pcr = PCR_ARCH_205;
> + break;
> + case PVR_ARCH_206:
> + case PVR_ARCH_206p:
> + break;
> + default:
> + return -EINVAL;
> + }
> + }
> +
> + spin_lock(&vc->lock);
> + vc->arch_compat = arch_compat;
> + vc->pcr = pcr;
> + spin_unlock(&vc->lock);
> +
> + return 0;
> +}
> +
> void kvmppc_dump_regs(struct kvm_vcpu *vcpu)
> {
> int r;
> @@ -817,6 +846,9 @@ int kvmppc_get_one_reg(struct kvm_vcpu *vcpu, u64 id, union kvmppc_one_reg *val)
> case KVM_REG_PPC_PPR:
> *val = get_reg_val(id, vcpu->arch.ppr);
> break;
> + case KVM_REG_PPC_ARCH_COMPAT:
> + *val = get_reg_val(id, vcpu->arch.vcore->arch_compat);
> + break;
> default:
> r = -EINVAL;
> break;
> @@ -927,6 +959,9 @@ int kvmppc_set_one_reg(struct kvm_vcpu *vcpu, u64 id, union kvmppc_one_reg *val)
> case KVM_REG_PPC_PPR:
> vcpu->arch.ppr = set_reg_val(id, *val);
> break;
> + case KVM_REG_PPC_ARCH_COMPAT:
> + r = kvmppc_set_arch_compat(vcpu, set_reg_val(id, *val));
> + break;
> default:
> r = -EINVAL;
> break;
> diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> index 88e7068..023d8600 100644
> --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> @@ -358,7 +358,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
> addis r8,r8,0x100 /* if so, increment upper 40 bits */
> mtspr SPRN_TBU40,r8
>
> -37: li r0,1
> + /* Load guest PCR value to select appropriate compat mode */
> +37: ld r7, VCORE_PCR(r5)
> + mtspr SPRN_PCR, r7
> +
> + li r0,1
> stb r0,VCORE_IN_GUEST(r5) /* signal secondaries to continue */
> b 10f
>
> @@ -984,8 +988,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
> addis r8,r8,0x100 /* if so, increment upper 40 bits */
> mtspr SPRN_TBU40,r8
>
> - /* Signal secondary CPUs to continue */
> + /* Reset PCR */
> 17: li r0,0
> + mtspr SPRN_PCR,r0
How long does writing to PCR take? Is it faster than a load+branch to see whether we actually need it? I would assume the normal fast path is going to be guest cpu == host.
Alex
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 11/11] KVM: PPC: Book3S HV: Return -EINVAL rather than BUG'ing
2013-09-06 3:25 ` [PATCH 11/11] KVM: PPC: Book3S HV: Return -EINVAL rather than BUG'ing Paul Mackerras
@ 2013-09-13 21:51 ` Alexander Graf
0 siblings, 0 replies; 34+ messages in thread
From: Alexander Graf @ 2013-09-13 21:51 UTC (permalink / raw)
To: Paul Mackerras; +Cc: kvm-ppc, kvm
On 05.09.2013, at 22:25, Paul Mackerras wrote:
> From: Michael Ellerman <michael@ellerman.id.au>
>
> This means that if we do happen to get a trap that we don't know
> about, we abort the guest rather than crashing the host kernel.
>
> Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
> Signed-off-by: Paul Mackerras <paulus@samba.org>
> ---
> arch/powerpc/kvm/book3s_hv.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 0bb23a9..731e46e 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -709,8 +709,7 @@ static int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
> printk(KERN_EMERG "trap=0x%x | pc=0x%lx | msr=0x%llx\n",
> vcpu->arch.trap, kvmppc_get_pc(vcpu),
> vcpu->arch.shregs.msr);
> - r = RESUME_HOST;
> - BUG();
> + r = -EINVAL;
This should probably tell user space what's going on. The way x86 handles this is through the kvm_run structure and I think we should try to stay as compatible as possible here:
arch/x86/kvm/svm.c:3489
arch/x86/kvm/vmx.c:6800
Alex
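To make the comparison concrete, here is a hedged sketch of the userspace side when a hypervisor follows the x86 convention Alex points to; the field names match the mainline kvm_run ABI, but the function itself is only illustrative.

    #include <stdio.h>
    #include <linux/kvm.h>

    /* Run-loop fragment: only the exit reason relevant to this
     * discussion is shown. */
    static void report_internal_error(struct kvm_run *run)
    {
    	if (run->exit_reason == KVM_EXIT_INTERNAL_ERROR) {
    		fprintf(stderr, "KVM internal error, suberror %u\n",
    			run->internal.suberror);
    		/* run->internal.data[0..ndata-1] carries extra detail */
    	}
    }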
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 10/11] KVM: PPC: Book3S HV: Avoid unbalanced increments of VPA yield count
2013-09-06 3:24 ` [PATCH 10/11] KVM: PPC: Book3S HV: Avoid unbalanced increments of VPA yield count Paul Mackerras
@ 2013-09-13 21:51 ` Alexander Graf
0 siblings, 0 replies; 34+ messages in thread
From: Alexander Graf @ 2013-09-13 21:51 UTC (permalink / raw)
To: Paul Mackerras; +Cc: kvm-ppc, kvm
On 05.09.2013, at 22:24, Paul Mackerras wrote:
> The yield count in the VPA is supposed to be incremented every time
> we enter the guest, and every time we exit the guest, so that its
> value is even when the vcpu is running in the guest and odd when it
> isn't. However, it's currently possible that we increment the yield
> count on the way into the guest but then find that other CPU threads
> are already exiting the guest, so we go back to nap mode via the
> secondary_too_late label. In this situation we don't increment the
> yield count again, breaking the relationship between the LSB of the
> count and whether the vcpu is in the guest.
>
> To fix this, we move the increment of the yield count to a point
> after we have checked whether other CPU threads are exiting.
>
> Signed-off-by: Paul Mackerras <paulus@samba.org>
Thanks, applied to kvm-ppc-queue.
Alex
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 09/11] KVM: PPC: Book3S HV: Pull out interrupt-reading code into a subroutine
2013-09-06 3:24 ` [PATCH 09/11] KVM: PPC: Book3S HV: Pull out interrupt-reading code into " Paul Mackerras
@ 2013-09-13 21:51 ` Alexander Graf
0 siblings, 0 replies; 34+ messages in thread
From: Alexander Graf @ 2013-09-13 21:51 UTC (permalink / raw)
To: Paul Mackerras; +Cc: kvm-ppc, kvm
On 05.09.2013, at 22:24, Paul Mackerras wrote:
> This moves the code in book3s_hv_rmhandlers.S that reads any pending
> interrupt from the XICS interrupt controller, and works out whether
> it is an IPI for the guest, an IPI for the host, or a device interrupt,
> into a new function called kvmppc_read_intr. Later patches will
> need this.
>
> Signed-off-by: Paul Mackerras <paulus@samba.org>
Thanks, applied to kvm-ppc-queue.
Alex
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 08/11] KVM: PPC: Book3S HV: Restructure kvmppc_hv_entry to be a subroutine
2013-09-06 3:23 ` [PATCH 08/11] KVM: PPC: Book3S HV: Restructure kvmppc_hv_entry to be a subroutine Paul Mackerras
@ 2013-09-13 21:51 ` Alexander Graf
0 siblings, 0 replies; 34+ messages in thread
From: Alexander Graf @ 2013-09-13 21:51 UTC (permalink / raw)
To: Paul Mackerras; +Cc: kvm-ppc, kvm
On 05.09.2013, at 22:23, Paul Mackerras wrote:
> We have two paths into and out of the low-level guest entry and exit
> code: from a vcpu task via kvmppc_hv_entry_trampoline, and from the
> system reset vector for an offline secondary thread on POWER7 via
> kvm_start_guest. Currently both just branch to kvmppc_hv_entry to
> enter the guest, and on guest exit, we test the vcpu physical thread
> ID to detect which way we came in and thus whether we should return
> to the vcpu task or go back to nap mode.
>
> In order to make the code flow clearer, and to keep the code relating
> to each flow together, this turns kvmppc_hv_entry into a subroutine
> that follows the normal conventions for call and return. This means
> that kvmppc_hv_entry_trampoline() and kvmppc_hv_entry() now establish
> normal stack frames, and we use the normal stack slots for saving
> return addresses rather than local_paca->kvm_hstate.vmhandler. Apart
> from that this is mostly moving code around unchanged.
>
> Signed-off-by: Paul Mackerras <paulus@samba.org>
Thanks, applied to kvm-ppc-queue.
Alex
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 07/11] KVM: PPC: Book3S HV: Implement H_CONFER
2013-09-06 3:23 ` [PATCH 07/11] KVM: PPC: Book3S HV: Implement H_CONFER Paul Mackerras
@ 2013-09-13 21:51 ` Alexander Graf
0 siblings, 0 replies; 34+ messages in thread
From: Alexander Graf @ 2013-09-13 21:51 UTC (permalink / raw)
To: Paul Mackerras; +Cc: kvm-ppc, kvm
On 05.09.2013, at 22:23, Paul Mackerras wrote:
> The H_CONFER hypercall is used when a guest vcpu is spinning on a lock
> held by another vcpu which has been preempted, and the spinning vcpu
> wishes to give its timeslice to the lock holder. We implement this
> in the straightforward way using kvm_vcpu_yield_to().
>
> Signed-off-by: Paul Mackerras <paulus@samba.org>
Thanks, applied to kvm-ppc-queue.
Alex
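The "straightforward way" in the commit message boils down to something like the following sketch; it is not the actual book3s_hv.c code, which takes the target from GPR4 and looks it up by vcpu id rather than by array index.

    /* Sketch only: confer our timeslice to the vcpu holding the lock,
     * using the generic KVM helper the commit message refers to. */
    static long sketch_h_confer(struct kvm_vcpu *vcpu, int target)
    {
    	struct kvm_vcpu *tvcpu = kvm_get_vcpu(vcpu->kvm, target);

    	if (!tvcpu)
    		return H_PARAMETER;
    	kvm_vcpu_yield_to(tvcpu);	/* donate our timeslice to the lock holder */
    	return H_SUCCESS;
    }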
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 05/11] KVM: PPC: Book3S HV: Add support for guest Program Priority Register
2013-09-06 3:22 ` [PATCH 05/11] KVM: PPC: Book3S HV: Add support for guest Program Priority Register Paul Mackerras
@ 2013-09-13 21:51 ` Alexander Graf
2013-09-17 3:29 ` Benjamin Herrenschmidt
1 sibling, 0 replies; 34+ messages in thread
From: Alexander Graf @ 2013-09-13 21:51 UTC (permalink / raw)
To: Paul Mackerras; +Cc: kvm-ppc, kvm
On 05.09.2013, at 22:22, Paul Mackerras wrote:
> POWER7 and later IBM server processors have a register called the
> Program Priority Register (PPR), which controls the priority of
> each hardware CPU SMT thread, and affects how fast it runs compared
> to other SMT threads. This priority can be controlled by writing to
> the PPR or by use of a set of instructions of the form or rN,rN,rN
> which are otherwise no-ops but have been defined to set the priority
> to particular levels.
>
> This adds code to context switch the PPR when entering and exiting
> guests and to make the PPR value accessible through the SET/GET_ONE_REG
> interface. When entering the guest, we set the PPR as late as
> possible, because if we are setting a low thread priority it will
> make the code run slowly from that point on. Similarly, the
> first-level interrupt handlers save the PPR value in the PACA very
> early on, and set the thread priority to the medium level, so that
> the interrupt handling code runs at a reasonable speed.
>
> Signed-off-by: Paul Mackerras <paulus@samba.org>
Reviewed-by: Alexander Graf <agraf@suse.de>
Alex
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 03/11] KVM: PPC: Book3S: Add GET/SET_ONE_REG interface for VRSAVE
2013-09-06 3:18 ` [PATCH 03/11] KVM: PPC: Book3S: Add GET/SET_ONE_REG interface for VRSAVE Paul Mackerras
@ 2013-09-13 21:51 ` Alexander Graf
2013-09-14 2:07 ` Paul Mackerras
0 siblings, 1 reply; 34+ messages in thread
From: Alexander Graf @ 2013-09-13 21:51 UTC (permalink / raw)
To: Paul Mackerras; +Cc: kvm-ppc, kvm
On 05.09.2013, at 22:18, Paul Mackerras wrote:
> The VRSAVE register value for a vcpu is accessible through the
> GET/SET_SREGS interface for Book E processors, but not for Book 3S
> processors. In order to make this accessible for Book 3S processors,
> this adds a new register identifier for GET/SET_ONE_REG, and adds
> the code to implement it.
>
> Signed-off-by: Paul Mackerras <paulus@samba.org>
> ---
> Documentation/virtual/kvm/api.txt | 1 +
> arch/powerpc/include/uapi/asm/kvm.h | 2 ++
> arch/powerpc/kvm/book3s.c | 10 ++++++++++
> 3 files changed, 13 insertions(+)
>
> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> index 9486e5a..c36ff9af 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -1834,6 +1834,7 @@ registers, find a list below:
> PPC | KVM_REG_PPC_TCSCR | 64
> PPC | KVM_REG_PPC_PID | 64
> PPC | KVM_REG_PPC_ACOP | 64
> + PPC | KVM_REG_PPC_VRSAVE | 32
> PPC | KVM_REG_PPC_TM_GPR0 | 64
> ...
> PPC | KVM_REG_PPC_TM_GPR31 | 64
> diff --git a/arch/powerpc/include/uapi/asm/kvm.h b/arch/powerpc/include/uapi/asm/kvm.h
> index a8124fe..b98bf3f 100644
> --- a/arch/powerpc/include/uapi/asm/kvm.h
> +++ b/arch/powerpc/include/uapi/asm/kvm.h
> @@ -532,6 +532,8 @@ struct kvm_get_htab_header {
> #define KVM_REG_PPC_PID (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xb2)
> #define KVM_REG_PPC_ACOP (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xb3)
>
> +#define KVM_REG_PPC_VRSAVE (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0xb4)
> +
> /* Transactional Memory checkpointed state:
> * This is all GPRs, all VSX regs and a subset of SPRs
> */
> diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
> index 700df6f..f97369d 100644
> --- a/arch/powerpc/kvm/book3s.c
> +++ b/arch/powerpc/kvm/book3s.c
I don't like how this is available for book3s, but not for booke. In the long run it might make sense to create a generic one_reg handler for shared fields. But in the meantime, could you please just add handling for booke as well? I'll apply the patch in the meantime, but please send another one doing this for booke as well.
Alex
> @@ -528,6 +528,9 @@ int kvm_vcpu_ioctl_get_one_reg(struct kvm_vcpu *vcpu, struct kvm_one_reg *reg)
> }
> val = get_reg_val(reg->id, vcpu->arch.vscr.u[3]);
> break;
> + case KVM_REG_PPC_VRSAVE:
> + val = get_reg_val(reg->id, vcpu->arch.vrsave);
> + break;
> #endif /* CONFIG_ALTIVEC */
> case KVM_REG_PPC_DEBUG_INST: {
> u32 opcode = INS_TW;
> @@ -605,6 +608,13 @@ int kvm_vcpu_ioctl_set_one_reg(struct kvm_vcpu *vcpu, struct kvm_one_reg *reg)
> }
> vcpu->arch.vscr.u[3] = set_reg_val(reg->id, val);
> break;
> + case KVM_REG_PPC_VRSAVE:
> + if (!cpu_has_feature(CPU_FTR_ALTIVEC)) {
> + r = -ENXIO;
> + break;
> + }
> + vcpu->arch.vrsave = set_reg_val(reg->id, val);
> + break;
> #endif /* CONFIG_ALTIVEC */
> #ifdef CONFIG_KVM_XICS
> case KVM_REG_PPC_ICP_STATE:
> --
> 1.8.4.rc3
>
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 02/11] KVM: PPC: Book3S HV: Implement timebase offset for guests
2013-09-06 3:17 ` [PATCH 02/11] KVM: PPC: Book3S HV: Implement timebase offset for guests Paul Mackerras
@ 2013-09-13 21:51 ` Alexander Graf
0 siblings, 0 replies; 34+ messages in thread
From: Alexander Graf @ 2013-09-13 21:51 UTC (permalink / raw)
To: Paul Mackerras; +Cc: kvm-ppc, kvm
On 05.09.2013, at 22:17, Paul Mackerras wrote:
> This allows guests to have a different timebase origin from the host.
> This is needed for migration, where a guest can migrate from one host
> to another and the two hosts might have a different timebase origin.
> However, the timebase seen by the guest must not go backwards, and
> should go forwards only by a small amount corresponding to the time
> taken for the migration.
>
> Therefore this provides a new per-vcpu value accessed via the one_reg
> interface using the new KVM_REG_PPC_TB_OFFSET identifier. This value
> defaults to 0 and is not modified by KVM. On entering the guest, this
> value is added onto the timebase, and on exiting the guest, it is
> subtracted from the timebase.
>
> This is only supported for recent POWER hardware which has the TBU40
> (timebase upper 40 bits) register. Writing to the TBU40 register only
> alters the upper 40 bits of the timebase, leaving the lower 24 bits
> unchanged. This provides a way to modify the timebase for guest
> migration without disturbing the synchronization of the timebase
> registers across CPU cores. The kernel rounds up the value given
> to a multiple of 2^24.
>
> Timebase values stored in KVM structures (struct kvm_vcpu, struct
> kvmppc_vcore, etc.) are stored as host timebase values. The timebase
> values in the dispatch trace log need to be guest timebase values,
> however, since that is read directly by the guest. This moves the
> setting of vcpu->arch.dec_expires on guest exit to a point after we
> have restored the host timebase so that vcpu->arch.dec_expires is a
> host timebase value.
>
> Signed-off-by: Paul Mackerras <paulus@samba.org>
Thanks, applied to kvm-ppc-queue.
Alex
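The 2^24 rounding mentioned in the description comes straight from the TBU40 granularity; expressed in C it is roughly the following (a sketch, not the code in the patch):

    /* TBU40 can only change the upper 40 bits of the 64-bit timebase, so
     * the offset is rounded up to a multiple of 2^24 before being applied. */
    static inline unsigned long long round_tb_offset(unsigned long long offset)
    {
    	return (offset + (1ULL << 24) - 1) & ~((1ULL << 24) - 1);
    }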
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 01/11] KVM: PPC: Book3S HV: Save/restore SIAR and SDAR along with other PMU registers
2013-09-06 3:11 ` [PATCH 01/11] KVM: PPC: Book3S HV: Save/restore SIAR and SDAR along with other PMU registers Paul Mackerras
@ 2013-09-13 21:51 ` Alexander Graf
0 siblings, 0 replies; 34+ messages in thread
From: Alexander Graf @ 2013-09-13 21:51 UTC (permalink / raw)
To: Paul Mackerras; +Cc: kvm-ppc, kvm
On 05.09.2013, at 22:11, Paul Mackerras wrote:
> Currently we are not saving and restoring the SIAR and SDAR registers in
> the PMU (performance monitor unit) on guest entry and exit. The result
> is that performance monitoring tools in the guest could get false
> information about where a program was executing and what data it was
> accessing at the time of a performance monitor interrupt. This fixes
> it by saving and restoring these registers along with the other PMU
> registers on guest entry/exit.
>
> This also provides a way for userspace to access these values for a
> vcpu via the one_reg interface.
>
> Signed-off-by: Paul Mackerras <paulus@samba.org>
Thanks, applied to kvm-ppc-queue.
Alex
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 06/11] KVM: PPC: Book3S HV: Support POWER6 compatibility mode on POWER7
2013-09-13 19:58 ` Alexander Graf
@ 2013-09-14 2:03 ` Paul Mackerras
0 siblings, 0 replies; 34+ messages in thread
From: Paul Mackerras @ 2013-09-14 2:03 UTC (permalink / raw)
To: Alexander Graf; +Cc: kvm-ppc, kvm
On Fri, Sep 13, 2013 at 02:58:43PM -0500, Alexander Graf wrote:
>
> How long does writing to PCR take? Is it faster than a load+branch to see whether we actually need it? I would assume the normal fast path is going to be guest cpu == host.
Seems to be about 30 cycles to write PCR, so a compare and branch is
faster. I'll change it.
Paul.
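In C terms, the change Paul is agreeing to amounts to the sketch below; the real code is the assembly in kvmppc_hv_entry, and vc->pcr is zero whenever no compatibility mode is selected.

    /* Skip the ~30-cycle SPR write on the common path where the guest
     * runs at the host's native architecture level (vc->pcr == 0). */
    static void load_guest_pcr(struct kvmppc_vcore *vc)
    {
    	if (vc->pcr)
    		mtspr(SPRN_PCR, vc->pcr);
    }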
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 03/11] KVM: PPC: Book3S: Add GET/SET_ONE_REG interface for VRSAVE
2013-09-13 21:51 ` Alexander Graf
@ 2013-09-14 2:07 ` Paul Mackerras
0 siblings, 0 replies; 34+ messages in thread
From: Paul Mackerras @ 2013-09-14 2:07 UTC (permalink / raw)
To: Alexander Graf; +Cc: kvm-ppc, kvm
On Fri, Sep 13, 2013 at 04:51:40PM -0500, Alexander Graf wrote:
>
> I don't like how this is available for book3s, but not for booke. In the long run it might make sense to create a generic one_reg handler for shared fields. But in the meantime, could you please just add handling for booke as well? I'll apply the patch in the meantime, but please send another one doing this for booke as well.
Sure, will do.
Paul.
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 04/11] KVM: PPC: Book3S HV: Add GET/SET_ONE_REG interface for LPCR
2013-09-13 18:36 ` Alexander Graf
@ 2013-09-14 2:21 ` Paul Mackerras
2013-09-14 5:12 ` Alexander Graf
0 siblings, 1 reply; 34+ messages in thread
From: Paul Mackerras @ 2013-09-14 2:21 UTC (permalink / raw)
To: Alexander Graf; +Cc: kvm-ppc, kvm
On Fri, Sep 13, 2013 at 01:36:25PM -0500, Alexander Graf wrote:
>
> On 05.09.2013, at 22:21, Paul Mackerras wrote:
>
> > This adds the ability for userspace to read and write the LPCR
> > (Logical Partitioning Control Register) value relating to a guest
> > via the GET/SET_ONE_REG interface. There is only one LPCR value
> > for the guest, which can be accessed through any vcpu. Userspace
> > can only modify the following fields of the LPCR value:
> >
> > DPFD Default prefetch depth
> > ILE Interrupt little-endian
> > TC Translation control (secondary HPT hash group search disable)
> >
> > Signed-off-by: Paul Mackerras <paulus@samba.org>
>
> There are 3 things I dislike about this patch :)
>
> 1) A vcpu one_reg should only change the state of the vcpu it's targeting. You want a vm wide thing.
In hardware there is in fact an LPCR per hardware CPU thread, though
the architecture says that on threaded processors many of the fields
have to be the same on each thread, including the three fields
mentioned here.
> 2) If anyone gets crazy enough to implement HV emulation in PR KVM this would overlap with the guest's guest LPCR, so we need to name it differently. It's really VM configuration, not a register. Maybe ENABLE_CAP is a better fit?
If we were doing HV emulation in PR KVM then there would still only be
one LPCR per guest vcpu. If there was a hypervisor running as the PR
guest, it would be its job to context switch the LPCR between its
guests. So I don't see why there would be any need for the top-level
KVM to know about the guest's guest LPCR.
In that situation we would need a one_reg to get and set the guest's
LPCR, which would be per vcpu, and there would be no restriction on
which bits could be set by the host.
It sounds to me like the best option would be to keep an LPCR per vcpu
and use the one_reg interface to let the host get and set it. The
actual LPCR applied when the guest runs would use some bits from the
per-vcpu value plus some bits set by the KVM host code (for the host's
protection). How does that sound?
Alternatively I could define a new per-VM ioctl. ENABLE_CAP doesn't
really help since it also is per-vcpu, not per-VM.
> 3) Checkpatch fails:
>
> WARNING: please, no space before tabs
> #59: FILE: arch/powerpc/include/asm/reg.h:295:
> +#define LPCR_TC ^I0x00000200^I/* Translation control */$
Oops, will fix.
Paul.
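The masking Paul describes might look roughly like this (a sketch, not the code in the patch); LPCR_DPFD, LPCR_ILE and LPCR_TC are the three fields the patch lets userspace touch.

    /* Fold the userspace-controlled fields into the LPCR the host
     * actually loads; everything outside the mask stays under host
     * control. */
    static unsigned long apply_user_lpcr(unsigned long host_lpcr,
    				     unsigned long user_lpcr)
    {
    	unsigned long mask = LPCR_DPFD | LPCR_ILE | LPCR_TC;

    	return (host_lpcr & ~mask) | (user_lpcr & mask);
    }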
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 04/11] KVM: PPC: Book3S HV: Add GET/SET_ONE_REG interface for LPCR
2013-09-14 2:21 ` Paul Mackerras
@ 2013-09-14 5:12 ` Alexander Graf
2013-09-14 5:58 ` Paul Mackerras
0 siblings, 1 reply; 34+ messages in thread
From: Alexander Graf @ 2013-09-14 5:12 UTC (permalink / raw)
To: Paul Mackerras; +Cc: kvm-ppc@vger.kernel.org, kvm@vger.kernel.org
On 13.09.2013 at 21:21, Paul Mackerras <paulus@samba.org> wrote:
> On Fri, Sep 13, 2013 at 01:36:25PM -0500, Alexander Graf wrote:
>>
>> On 05.09.2013, at 22:21, Paul Mackerras wrote:
>>
>>> This adds the ability for userspace to read and write the LPCR
>>> (Logical Partitioning Control Register) value relating to a guest
>>> via the GET/SET_ONE_REG interface. There is only one LPCR value
>>> for the guest, which can be accessed through any vcpu. Userspace
>>> can only modify the following fields of the LPCR value:
>>>
>>> DPFD Default prefetch depth
>>> ILE Interrupt little-endian
>>> TC Translation control (secondary HPT hash group search disable)
>>>
>>> Signed-off-by: Paul Mackerras <paulus@samba.org>
>>
>> There are 3 things I dislike about this patch :)
>>
>> 1) A vcpu one_reg should only change the state of the vcpu it's targeting. You want a vm wide thing.
>
> In hardware there is in fact an LPCR per hardware CPU thread, though
> the architecture says that on threaded processors many of the fields
> have to be the same on each thread, including the three fields
> mentioned here.
Ok. Is it mandatory to be identical per core? Or per vm?
>
>> 2) If anyone gets crazy enough to implement HV emulation in PR KVM this would overlap with the guest's guest LPCR, so we need to name it differently. It's really VM configuration, not a register. Maybe ENABLE_CAP is a better fit?
>
> If we were doing HV emulation in PR KVM then there would still only be
> one LPCR per guest vcpu. If there was a hypervisor running as the PR
> guest, it would be its job to context switch the LPCR between its
> guests. So I don't see why there would be any need for the top-level
> KVM to know about the guest's guest LPCR.
>
> In that situation we would need a one_reg to get and set the guest's
> LPCR, which would be per vcpu, and there would be no restriction on
> which bits could be set by the host.
Thinking about this a bit more I think you might be correct.
>
> It sounds to me like the best option would be to keep an LPCR per vcpu
> and use the one_reg interface to let the host get and set it. The
> actual LPCR applied when the guest runs would use some bits from the
> per-vcpu value plus some bits set by the KVM host code (for the host's
> protection). How does that sound?
Sounds similar to what you have, just that it's on vcpu level now :). I think that would work, yeah.
Alex
>
> Alternatively I could define a new per-VM ioctl. ENABLE_CAP doesn't
> really help since it also is per-vcpu, not per-VM.
>
>> 3) Checkpatch fails:
>>
>> WARNING: please, no space before tabs
>> #59: FILE: arch/powerpc/include/asm/reg.h:295:
>> +#define LPCR_TC ^I0x00000200^I/* Translation control */$
>
> Oops, will fix.
>
> Paul.
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 04/11] KVM: PPC: Book3S HV: Add GET/SET_ONE_REG interface for LPCR
2013-09-14 5:12 ` Alexander Graf
@ 2013-09-14 5:58 ` Paul Mackerras
2013-09-14 11:38 ` Alexander Graf
0 siblings, 1 reply; 34+ messages in thread
From: Paul Mackerras @ 2013-09-14 5:58 UTC (permalink / raw)
To: Alexander Graf; +Cc: kvm-ppc@vger.kernel.org, kvm@vger.kernel.org
On Sat, Sep 14, 2013 at 12:12:43AM -0500, Alexander Graf wrote:
>
>
> On 13.09.2013 at 21:21, Paul Mackerras <paulus@samba.org> wrote:
>
> > In hardware there is in fact an LPCR per hardware CPU thread, though
> > the architecture says that on threaded processors many of the fields
> > have to be the same on each thread, including the three fields
> > mentioned here.
>
> Ok. Is it mandatory to be identical per core? Or per vm?
Per core.
Paul.
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 04/11] KVM: PPC: Book3S HV: Add GET/SET_ONE_REG interface for LPCR
2013-09-14 5:58 ` Paul Mackerras
@ 2013-09-14 11:38 ` Alexander Graf
0 siblings, 0 replies; 34+ messages in thread
From: Alexander Graf @ 2013-09-14 11:38 UTC (permalink / raw)
To: Paul Mackerras; +Cc: kvm-ppc@vger.kernel.org, kvm@vger.kernel.org
On 14.09.2013 at 00:58, Paul Mackerras <paulus@samba.org> wrote:
> On Sat, Sep 14, 2013 at 12:12:43AM -0500, Alexander Graf wrote:
>>
>>
>> On 13.09.2013 at 21:21, Paul Mackerras <paulus@samba.org> wrote:
>>
>>> In hardware there is in fact an LPCR per hardware CPU thread, though
>>> the architecture says that on threaded processors many of the fields
>>> have to be the same on each thread, including the three fields
>>> mentioned here.
>>
>> Ok. Is it mandatory to be identical per core? Or per vm?
>
> Per core.
Then put it into the vcore struct like you put everything else with core wide scope, no? :)
Alex
>
> Paul.
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 05/11] KVM: PPC: Book3S HV: Add support for guest Program Priority Register
2013-09-06 3:22 ` [PATCH 05/11] KVM: PPC: Book3S HV: Add support for guest Program Priority Register Paul Mackerras
2013-09-13 21:51 ` Alexander Graf
@ 2013-09-17 3:29 ` Benjamin Herrenschmidt
2013-09-20 3:39 ` Alexander Graf
1 sibling, 1 reply; 34+ messages in thread
From: Benjamin Herrenschmidt @ 2013-09-17 3:29 UTC (permalink / raw)
To: Paul Mackerras; +Cc: Alexander Graf, kvm-ppc, kvm
On Fri, 2013-09-06 at 13:22 +1000, Paul Mackerras wrote:
> POWER7 and later IBM server processors have a register called the
> Program Priority Register (PPR), which controls the priority of
> each hardware CPU SMT thread, and affects how fast it runs compared
> to other SMT threads. This priority can be controlled by writing to
> the PPR or by use of a set of instructions of the form or rN,rN,rN
> which are otherwise no-ops but have been defined to set the priority
> to particular levels.
>
> This adds code to context switch the PPR when entering and exiting
> guests and to make the PPR value accessible through the SET/GET_ONE_REG
> interface. When entering the guest, we set the PPR as late as
> possible, because if we are setting a low thread priority it will
> make the code run slowly from that point on. Similarly, the
> first-level interrupt handlers save the PPR value in the PACA very
> early on, and set the thread priority to the medium level, so that
> the interrupt handling code runs at a reasonable speed.
>
> Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Alex, can you take this via your tree ?
Cheers,
Ben.
> ---
> Documentation/virtual/kvm/api.txt | 1 +
> arch/powerpc/include/asm/exception-64s.h | 8 ++++++++
> arch/powerpc/include/asm/kvm_book3s_asm.h | 1 +
> arch/powerpc/include/asm/kvm_host.h | 1 +
> arch/powerpc/include/uapi/asm/kvm.h | 1 +
> arch/powerpc/kernel/asm-offsets.c | 2 ++
> arch/powerpc/kvm/book3s_hv.c | 6 ++++++
> arch/powerpc/kvm/book3s_hv_rmhandlers.S | 12 +++++++++++-
> 8 files changed, 31 insertions(+), 1 deletion(-)
>
> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> index 1030ac9..34a32b6 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -1836,6 +1836,7 @@ registers, find a list below:
> PPC | KVM_REG_PPC_ACOP | 64
> PPC | KVM_REG_PPC_VRSAVE | 32
> PPC | KVM_REG_PPC_LPCR | 64
> + PPC | KVM_REG_PPC_PPR | 64
> PPC | KVM_REG_PPC_TM_GPR0 | 64
> ...
> PPC | KVM_REG_PPC_TM_GPR31 | 64
> diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
> index 07ca627..b86c4db 100644
> --- a/arch/powerpc/include/asm/exception-64s.h
> +++ b/arch/powerpc/include/asm/exception-64s.h
> @@ -203,6 +203,10 @@ do_kvm_##n: \
> ld r10,area+EX_CFAR(r13); \
> std r10,HSTATE_CFAR(r13); \
> END_FTR_SECTION_NESTED(CPU_FTR_CFAR,CPU_FTR_CFAR,947); \
> + BEGIN_FTR_SECTION_NESTED(948) \
> + ld r10,area+EX_PPR(r13); \
> + std r10,HSTATE_PPR(r13); \
> + END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948); \
> ld r10,area+EX_R10(r13); \
> stw r9,HSTATE_SCRATCH1(r13); \
> ld r9,area+EX_R9(r13); \
> @@ -216,6 +220,10 @@ do_kvm_##n: \
> ld r10,area+EX_R10(r13); \
> beq 89f; \
> stw r9,HSTATE_SCRATCH1(r13); \
> + BEGIN_FTR_SECTION_NESTED(948) \
> + ld r9,area+EX_PPR(r13); \
> + std r9,HSTATE_PPR(r13); \
> + END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948); \
> ld r9,area+EX_R9(r13); \
> std r12,HSTATE_SCRATCH0(r13); \
> li r12,n; \
> diff --git a/arch/powerpc/include/asm/kvm_book3s_asm.h b/arch/powerpc/include/asm/kvm_book3s_asm.h
> index 9039d3c..22f4606 100644
> --- a/arch/powerpc/include/asm/kvm_book3s_asm.h
> +++ b/arch/powerpc/include/asm/kvm_book3s_asm.h
> @@ -101,6 +101,7 @@ struct kvmppc_host_state {
> #endif
> #ifdef CONFIG_PPC_BOOK3S_64
> u64 cfar;
> + u64 ppr;
> #endif
> };
>
> diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
> index 9741bf0..b0dcd18 100644
> --- a/arch/powerpc/include/asm/kvm_host.h
> +++ b/arch/powerpc/include/asm/kvm_host.h
> @@ -464,6 +464,7 @@ struct kvm_vcpu_arch {
> u32 ctrl;
> ulong dabr;
> ulong cfar;
> + ulong ppr;
> #endif
> u32 vrsave; /* also USPRG0 */
> u32 mmucr;
> diff --git a/arch/powerpc/include/uapi/asm/kvm.h b/arch/powerpc/include/uapi/asm/kvm.h
> index e42127d..fab6bc1 100644
> --- a/arch/powerpc/include/uapi/asm/kvm.h
> +++ b/arch/powerpc/include/uapi/asm/kvm.h
> @@ -534,6 +534,7 @@ struct kvm_get_htab_header {
>
> #define KVM_REG_PPC_VRSAVE (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0xb4)
> #define KVM_REG_PPC_LPCR (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0xb5)
> +#define KVM_REG_PPC_PPR (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xb6)
>
> /* Transactional Memory checkpointed state:
> * This is all GPRs, all VSX regs and a subset of SPRs
> diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
> index ccb42cd..5c6ea96 100644
> --- a/arch/powerpc/kernel/asm-offsets.c
> +++ b/arch/powerpc/kernel/asm-offsets.c
> @@ -516,6 +516,7 @@ int main(void)
> DEFINE(VCPU_TRAP, offsetof(struct kvm_vcpu, arch.trap));
> DEFINE(VCPU_PTID, offsetof(struct kvm_vcpu, arch.ptid));
> DEFINE(VCPU_CFAR, offsetof(struct kvm_vcpu, arch.cfar));
> + DEFINE(VCPU_PPR, offsetof(struct kvm_vcpu, arch.ppr));
> DEFINE(VCORE_ENTRY_EXIT, offsetof(struct kvmppc_vcore, entry_exit_count));
> DEFINE(VCORE_NAP_COUNT, offsetof(struct kvmppc_vcore, nap_count));
> DEFINE(VCORE_IN_GUEST, offsetof(struct kvmppc_vcore, in_guest));
> @@ -600,6 +601,7 @@ int main(void)
>
> #ifdef CONFIG_PPC_BOOK3S_64
> HSTATE_FIELD(HSTATE_CFAR, cfar);
> + HSTATE_FIELD(HSTATE_PPR, ppr);
> #endif /* CONFIG_PPC_BOOK3S_64 */
>
> #else /* CONFIG_PPC_BOOK3S */
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 9c878d7..eceff7e 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -814,6 +814,9 @@ int kvmppc_get_one_reg(struct kvm_vcpu *vcpu, u64 id, union kvmppc_one_reg *val)
> case KVM_REG_PPC_LPCR:
> *val = get_reg_val(id, vcpu->kvm->arch.lpcr);
> break;
> + case KVM_REG_PPC_PPR:
> + *val = get_reg_val(id, vcpu->arch.ppr);
> + break;
> default:
> r = -EINVAL;
> break;
> @@ -921,6 +924,9 @@ int kvmppc_set_one_reg(struct kvm_vcpu *vcpu, u64 id, union kvmppc_one_reg *val)
> case KVM_REG_PPC_LPCR:
> kvmppc_set_lpcr(vcpu, set_reg_val(id, *val));
> break;
> + case KVM_REG_PPC_PPR:
> + vcpu->arch.ppr = set_reg_val(id, *val);
> + break;
> default:
> r = -EINVAL;
> break;
> diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> index 85f8dd0..88e7068 100644
> --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> @@ -561,13 +561,15 @@ BEGIN_FTR_SECTION
> ld r5, VCPU_CFAR(r4)
> mtspr SPRN_CFAR, r5
> END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
> +BEGIN_FTR_SECTION
> + ld r0, VCPU_PPR(r4)
> +END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
>
> ld r5, VCPU_LR(r4)
> lwz r6, VCPU_CR(r4)
> mtlr r5
> mtcr r6
>
> - ld r0, VCPU_GPR(R0)(r4)
> ld r1, VCPU_GPR(R1)(r4)
> ld r2, VCPU_GPR(R2)(r4)
> ld r3, VCPU_GPR(R3)(r4)
> @@ -581,6 +583,10 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
> ld r12, VCPU_GPR(R12)(r4)
> ld r13, VCPU_GPR(R13)(r4)
>
> +BEGIN_FTR_SECTION
> + mtspr SPRN_PPR, r0
> +END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
> + ld r0, VCPU_GPR(R0)(r4)
> ld r4, VCPU_GPR(R4)(r4)
>
> hrfid
> @@ -631,6 +637,10 @@ BEGIN_FTR_SECTION
> ld r3, HSTATE_CFAR(r13)
> std r3, VCPU_CFAR(r9)
> END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
> +BEGIN_FTR_SECTION
> + ld r4, HSTATE_PPR(r13)
> + std r4, VCPU_PPR(r9)
> +END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
>
> /* Restore R1/R2 so we can handle faults */
> ld r1, HSTATE_HOST_R1(r13)
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 05/11] KVM: PPC: Book3S HV: Add support for guest Program Priority Register
2013-09-17 3:29 ` Benjamin Herrenschmidt
@ 2013-09-20 3:39 ` Alexander Graf
0 siblings, 0 replies; 34+ messages in thread
From: Alexander Graf @ 2013-09-20 3:39 UTC (permalink / raw)
To: Benjamin Herrenschmidt; +Cc: Paul Mackerras, kvm-ppc, kvm
On 16.09.2013, at 22:29, Benjamin Herrenschmidt wrote:
> On Fri, 2013-09-06 at 13:22 +1000, Paul Mackerras wrote:
>> POWER7 and later IBM server processors have a register called the
>> Program Priority Register (PPR), which controls the priority of
>> each hardware CPU SMT thread, and affects how fast it runs compared
>> to other SMT threads. This priority can be controlled by writing to
>> the PPR or by use of a set of instructions of the form or rN,rN,rN
>> which are otherwise no-ops but have been defined to set the priority
>> to particular levels.
>>
>> This adds code to context switch the PPR when entering and exiting
>> guests and to make the PPR value accessible through the SET/GET_ONE_REG
>> interface. When entering the guest, we set the PPR as late as
>> possible, because if we are setting a low thread priority it will
>> make the code run slowly from that point on. Similarly, the
>> first-level interrupt handlers save the PPR value in the PACA very
>> early on, and set the thread priority to the medium level, so that
>> the interrupt handling code runs at a reasonable speed.
>>
>> Signed-off-by: Paul Mackerras <paulus@samba.org>
>
> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
>
> Alex, can you take this via your tree ?
Yes, on the next respin :). Or is this one urgent?
Alex
^ permalink raw reply [flat|nested] 34+ messages in thread
Thread overview: 34+ messages
2013-09-06 3:10 [PATCH 00/11] HV KVM improvements in preparation for POWER8 support Paul Mackerras
2013-09-06 3:11 ` [PATCH 01/11] KVM: PPC: Book3S HV: Save/restore SIAR and SDAR along with other PMU registers Paul Mackerras
2013-09-13 21:51 ` Alexander Graf
2013-09-06 3:17 ` [PATCH 02/11] KVM: PPC: Book3S HV: Implement timebase offset for guests Paul Mackerras
2013-09-13 21:51 ` Alexander Graf
2013-09-06 3:18 ` [PATCH 03/11] KVM: PPC: Book3S: Add GET/SET_ONE_REG interface for VRSAVE Paul Mackerras
2013-09-13 21:51 ` Alexander Graf
2013-09-14 2:07 ` Paul Mackerras
2013-09-06 3:21 ` [PATCH 04/11] KVM: PPC: Book3S HV: Add GET/SET_ONE_REG interface for LPCR Paul Mackerras
2013-09-13 18:36 ` Alexander Graf
2013-09-14 2:21 ` Paul Mackerras
2013-09-14 5:12 ` Alexander Graf
2013-09-14 5:58 ` Paul Mackerras
2013-09-14 11:38 ` Alexander Graf
2013-09-06 3:22 ` [PATCH 05/11] KVM: PPC: Book3S HV: Add support for guest Program Priority Register Paul Mackerras
2013-09-13 21:51 ` Alexander Graf
2013-09-17 3:29 ` Benjamin Herrenschmidt
2013-09-20 3:39 ` Alexander Graf
2013-09-06 3:22 ` [PATCH 06/11] KVM: PPC: Book3S HV: Support POWER6 compatibility mode on POWER7 Paul Mackerras
2013-09-06 5:28 ` Aneesh Kumar K.V
2013-09-06 6:38 ` Paul Mackerras
2013-09-13 19:58 ` Alexander Graf
2013-09-14 2:03 ` Paul Mackerras
2013-09-06 3:23 ` [PATCH 07/11] KVM: PPC: Book3S HV: Implement H_CONFER Paul Mackerras
2013-09-13 21:51 ` Alexander Graf
2013-09-06 3:23 ` [PATCH 08/11] KVM: PPC: Book3S HV: Restructure kvmppc_hv_entry to be a subroutine Paul Mackerras
2013-09-13 21:51 ` Alexander Graf
2013-09-06 3:24 ` [PATCH 09/11] KVM: PPC: Book3S HV: Pull out interrupt-reading code into " Paul Mackerras
2013-09-13 21:51 ` Alexander Graf
2013-09-06 3:24 ` [PATCH 10/11] KVM: PPC: Book3S HV: Avoid unbalanced increments of VPA yield count Paul Mackerras
2013-09-13 21:51 ` Alexander Graf
2013-09-06 3:25 ` [PATCH 11/11] KVM: PPC: Book3S HV: Return -EINVAL rather than BUG'ing Paul Mackerras
2013-09-13 21:51 ` Alexander Graf
2013-09-11 9:11 ` [PATCH 00/11] HV KVM improvements in preparation for POWER8 support Paul Mackerras