public inbox for kvm@vger.kernel.org
* [PATCH 0/5] Some fixes and improvements for PR KVM
@ 2013-06-22  7:12 Paul Mackerras
  2013-06-22  7:13 ` [PATCH 1/5] KVM: PPC: Book3S PR: Fix proto-VSID calculations Paul Mackerras
                   ` (5 more replies)
  0 siblings, 6 replies; 11+ messages in thread
From: Paul Mackerras @ 2013-06-22  7:12 UTC (permalink / raw)
  To: Alexander Graf, kvm-ppc; +Cc: kvm

This series of 5 patches is against the KVM next branch.  It fixes
some bugs in PR-style KVM on Book 3S PPC and adds support for the
guest using 1TB segments as well as 256MB segments.  My ultimate goal
is to make it possible to configure both HV and PR KVM into the same
kernel binary, and these are just the first few steps.

Paul.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH 1/5] KVM: PPC: Book3S PR: Fix proto-VSID calculations
  2013-06-22  7:12 [PATCH 0/5] Some fixes and improvements for PR KVM Paul Mackerras
@ 2013-06-22  7:13 ` Paul Mackerras
  2013-06-22  7:14 ` [PATCH 2/5] KVM: PPC: Book3S PR: Fix invalidation of SLB entry 0 on guest entry Paul Mackerras
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Paul Mackerras @ 2013-06-22  7:13 UTC (permalink / raw)
  To: Alexander Graf, kvm-ppc; +Cc: kvm

This makes sure the calculation of the proto-VSIDs used by PR KVM
is done with 64-bit arithmetic.  Since vcpu3s->context_id[] is int,
when we do vcpu3s->context_id[0] << ESID_BITS the shift will be done
with 32-bit instructions, possibly leading to significant bits
getting lost, as the context id can be up to 524283 and ESID_BITS is
18.  To fix this we cast the context id to u64 before shifting.

Signed-off-by: Paul Mackerras <paulus@samba.org>
---
 arch/powerpc/kvm/book3s_64_mmu_host.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index 3a9a1ac..2c6e7ee 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -325,9 +325,9 @@ int kvmppc_mmu_init(struct kvm_vcpu *vcpu)
 		return -1;
 	vcpu3s->context_id[0] = err;
 
-	vcpu3s->proto_vsid_max = ((vcpu3s->context_id[0] + 1)
+	vcpu3s->proto_vsid_max = ((u64)(vcpu3s->context_id[0] + 1)
 				  << ESID_BITS) - 1;
-	vcpu3s->proto_vsid_first = vcpu3s->context_id[0] << ESID_BITS;
+	vcpu3s->proto_vsid_first = (u64)vcpu3s->context_id[0] << ESID_BITS;
 	vcpu3s->proto_vsid_next = vcpu3s->proto_vsid_first;
 
 	kvmppc_mmu_hpte_init(vcpu);
-- 
1.8.3.1


* [PATCH 2/5] KVM: PPC: Book3S PR: Fix invalidation of SLB entry 0 on guest entry
  2013-06-22  7:12 [PATCH 0/5] Some fixes and improvements for PR KVM Paul Mackerras
  2013-06-22  7:13 ` [PATCH 1/5] KVM: PPC: Book3S PR: Fix proto-VSID calculations Paul Mackerras
@ 2013-06-22  7:14 ` Paul Mackerras
  2013-06-22  7:14 ` [PATCH 3/5] KVM: PPC: Book3S PR: Don't keep scanning HPTEG after we find a match Paul Mackerras
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Paul Mackerras @ 2013-06-22  7:14 UTC (permalink / raw)
  To: Alexander Graf, kvm-ppc; +Cc: kvm

On entering a PR KVM guest, we invalidate the whole SLB before loading
up the guest entries.  We do this using an slbia instruction, which
invalidates all entries except entry 0, followed by an slbie to
invalidate entry 0.  However, the slbie turns out to be ineffective
in some circumstances (specifically when the host linear mapping uses
64k pages) because of errors in computing the parameter to the slbie.
The result is that the guest kernel hangs very early in boot because
it takes a DSI the first time it tries to access kernel data using
a linear mapping address in real mode.

Currently we construct bits 36 - 43 (big-endian numbering) of the slbie
parameter by taking bits 56 - 63 of the SLB VSID doubleword.  These bits
for the slbie are C (class, 1 bit), B (segment size, 2 bits) and 5
reserved bits.  For the SLB VSID doubleword these are C (class, 1 bit),
reserved (1 bit), LP (large page size, 2 bits), and 4 reserved bits.
Thus we are not setting the B field correctly, and when LP = 01 as
it is for 64k pages, we are setting a reserved bit.

Rather than add more instructions to calculate the slbie parameter
correctly, this takes a simpler approach, which is to set entry 0 to
zeroes explicitly.  Normally slbmte should not be used to invalidate
an entry, since it doesn't invalidate the ERATs, but it is OK to use
it to invalidate an entry if it is immediately followed by slbia,
which does invalidate the ERATs.  (This has been confirmed with the
Power architects.)  This approach takes fewer instructions and will
work whatever the contents of entry 0.

Signed-off-by: Paul Mackerras <paulus@samba.org>
---
 arch/powerpc/kvm/book3s_64_slb.S | 13 ++-----------
 1 file changed, 2 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_slb.S b/arch/powerpc/kvm/book3s_64_slb.S
index 56b983e..4f0caec 100644
--- a/arch/powerpc/kvm/book3s_64_slb.S
+++ b/arch/powerpc/kvm/book3s_64_slb.S
@@ -66,10 +66,6 @@ slb_exit_skip_ ## num:
 
 	ld	r12, PACA_SLBSHADOWPTR(r13)
 
-	/* Save off the first entry so we can slbie it later */
-	ld	r10, SHADOW_SLB_ESID(0)(r12)
-	ld	r11, SHADOW_SLB_VSID(0)(r12)
-
 	/* Remove bolted entries */
 	UNBOLT_SLB_ENTRY(0)
 	UNBOLT_SLB_ENTRY(1)
@@ -81,15 +77,10 @@ slb_exit_skip_ ## num:
 
 	/* Flush SLB */
 
+	li	r10, 0
+	slbmte	r10, r10
 	slbia
 
-	/* r0 = esid & ESID_MASK */
-	rldicr  r10, r10, 0, 35
-	/* r0 |= CLASS_BIT(VSID) */
-	rldic   r12, r11, 56 - 36, 36
-	or      r10, r10, r12
-	slbie	r10
-
 	/* Fill SLB with our shadow */
 
 	lbz	r12, SVCPU_SLB_MAX(r3)
-- 
1.8.3.1


* [PATCH 3/5] KVM: PPC: Book3S PR: Don't keep scanning HPTEG after we find a match
  2013-06-22  7:12 [PATCH 0/5] Some fixes and improvements for PR KVM Paul Mackerras
  2013-06-22  7:13 ` [PATCH 1/5] KVM: PPC: Book3S PR: Fix proto-VSID calculations Paul Mackerras
  2013-06-22  7:14 ` [PATCH 2/5] KVM: PPC: Book3S PR: Fix invalidation of SLB entry 0 on guest entry Paul Mackerras
@ 2013-06-22  7:14 ` Paul Mackerras
  2013-06-22 17:42   ` Alexander Graf
  2013-06-22  7:15 ` [PATCH 4/5] KVM: PPC: Book3S PR: Invalidate SLB entries properly Paul Mackerras
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 11+ messages in thread
From: Paul Mackerras @ 2013-06-22  7:14 UTC (permalink / raw)
  To: Alexander Graf, kvm-ppc; +Cc: kvm

The loop in kvmppc_mmu_book3s_64_xlate() that looks up a translation
in the guest hashed page table (HPT) keeps going if it finds an
HPTE that matches but doesn't allow access.  This is incorrect; it
is different from what the hardware does, and there should never be
more than one matching HPTE anyway.  This fixes it to stop when any
matching HPTE is found.

Signed-off-by: Paul Mackerras <paulus@samba.org>
---
 arch/powerpc/kvm/book3s_64_mmu.c | 14 ++------------
 1 file changed, 2 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu.c b/arch/powerpc/kvm/book3s_64_mmu.c
index b871721..2e93bb5 100644
--- a/arch/powerpc/kvm/book3s_64_mmu.c
+++ b/arch/powerpc/kvm/book3s_64_mmu.c
@@ -167,7 +167,6 @@ static int kvmppc_mmu_book3s_64_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
 	int i;
 	u8 key = 0;
 	bool found = false;
-	bool perm_err = false;
 	int second = 0;
 	ulong mp_ea = vcpu->arch.magic_page_ea;
 
@@ -248,11 +247,6 @@ do_second:
 				break;
 			}
 
-			if (!gpte->may_read) {
-				perm_err = true;
-				continue;
-			}
-
 			dprintk("KVM MMU: Translated 0x%lx [0x%llx] -> 0x%llx "
 				"-> 0x%lx\n",
 				eaddr, avpn, gpte->vpage, gpte->raddr);
@@ -281,6 +275,8 @@ do_second:
 		if (pteg[i+1] != oldr)
 			copy_to_user((void __user *)ptegp, pteg, sizeof(pteg));
 
+		if (!gpte->may_read)
+			return -EPERM;
 		return 0;
 	} else {
 		dprintk("KVM MMU: No PTE found (ea=0x%lx sdr1=0x%llx "
@@ -296,13 +292,7 @@ do_second:
 		}
 	}
 
-
 no_page_found:
-
-
-	if (perm_err)
-		return -EPERM;
-
 	return -ENOENT;
 
 no_seg_found:
-- 
1.8.3.1



* [PATCH 4/5] KVM: PPC: Book3S PR: Invalidate SLB entries properly
  2013-06-22  7:12 [PATCH 0/5] Some fixes and improvements for PR KVM Paul Mackerras
                   ` (2 preceding siblings ...)
  2013-06-22  7:14 ` [PATCH 3/5] KVM: PPC: Book3S PR: Don't keep scanning HPTEG after we find a match Paul Mackerras
@ 2013-06-22  7:15 ` Paul Mackerras
  2013-06-22 17:48   ` Alexander Graf
  2013-06-22  7:16 ` [PATCH 5/5] KVM: PPC: Book3S PR: Allow guest to use 1TB segments Paul Mackerras
  2013-06-22 17:57 ` [PATCH 0/5] Some fixes and improvements for PR KVM Alexander Graf
  5 siblings, 1 reply; 11+ messages in thread
From: Paul Mackerras @ 2013-06-22  7:15 UTC (permalink / raw)
  To: Alexander Graf, kvm-ppc; +Cc: kvm

At present, if the guest creates a valid SLB (segment lookaside buffer)
entry with the slbmte instruction, then invalidates it with the slbie
instruction, then reads the entry with the slbmfee/slbmfev instructions,
the result of the slbmfee will have the valid bit set, even though the
entry is not actually considered valid by the host.  This is confusing,
if not worse.  This fixes it by zeroing out the orige and origv fields
of the SLB entry structure when the entry is invalidated.

Signed-off-by: Paul Mackerras <paulus@samba.org>
---
 arch/powerpc/kvm/book3s_64_mmu.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu.c b/arch/powerpc/kvm/book3s_64_mmu.c
index 2e93bb5..7519124 100644
--- a/arch/powerpc/kvm/book3s_64_mmu.c
+++ b/arch/powerpc/kvm/book3s_64_mmu.c
@@ -376,6 +376,8 @@ static void kvmppc_mmu_book3s_64_slbie(struct kvm_vcpu *vcpu, u64 ea)
 	dprintk("KVM MMU: slbie(0x%llx, 0x%llx)\n", ea, slbe->esid);
 
 	slbe->valid = false;
+	slbe->orige = 0;
+	slbe->origv = 0;
 
 	kvmppc_mmu_map_segment(vcpu, ea);
 }
@@ -386,8 +388,11 @@ static void kvmppc_mmu_book3s_64_slbia(struct kvm_vcpu *vcpu)
 
 	dprintk("KVM MMU: slbia()\n");
 
-	for (i = 1; i < vcpu->arch.slb_nr; i++)
+	for (i = 1; i < vcpu->arch.slb_nr; i++) {
 		vcpu->arch.slb[i].valid = false;
+		vcpu->arch.slb[i].orige = 0;
+		vcpu->arch.slb[i].origv = 0;
+	}
 
 	if (vcpu->arch.shared->msr & MSR_IR) {
 		kvmppc_mmu_flush_segments(vcpu);
-- 
1.8.3.1



* [PATCH 5/5] KVM: PPC: Book3S PR: Allow guest to use 1TB segments
  2013-06-22  7:12 [PATCH 0/5] Some fixes and improvements for PR KVM Paul Mackerras
                   ` (3 preceding siblings ...)
  2013-06-22  7:15 ` [PATCH 4/5] KVM: PPC: Book3S PR: Invalidate SLB entries properly Paul Mackerras
@ 2013-06-22  7:16 ` Paul Mackerras
  2013-06-22 17:57 ` [PATCH 0/5] Some fixes and improvements for PR KVM Alexander Graf
  5 siblings, 0 replies; 11+ messages in thread
From: Paul Mackerras @ 2013-06-22  7:16 UTC (permalink / raw)
  To: Alexander Graf, kvm-ppc; +Cc: kvm

With this, the guest can use 1TB segments as well as 256MB segments.
Since we now have the situation where a single emulated guest segment
could correspond to multiple shadow segments (as the shadow segments
are still 256MB segments), this adds a new kvmppc_mmu_flush_segment()
to scan for all shadow segments that need to be removed.

This restructures the guest HPT (hashed page table) lookup code to
use the correct hashing and matching functions for HPTEs within a
1TB segment.  We use the standard hpt_hash() function instead of
open-coding the hash calculation, and we use HPTE_V_COMPARE() with
an AVPN value that has the B (segment size) field included.  The
calculation of avpn is done a little earlier since it doesn't change
in the loop starting at the do_second label.

The computation in kvmppc_mmu_book3s_64_esid_to_vsid() changes so that
it returns a 256MB VSID even if the guest SLB entry is a 1TB entry.
This is because the users of this function are creating 256MB SLB
entries.  We set a new VSID_1T flag so that entries created from 1T
segments don't collide with entries from 256MB segments.

Signed-off-by: Paul Mackerras <paulus@samba.org>
---
 arch/powerpc/include/asm/kvm_book3s.h |  6 ++--
 arch/powerpc/kvm/book3s_64_mmu.c      | 60 +++++++++++++++++++++++++----------
 arch/powerpc/kvm/book3s_64_mmu_host.c | 17 ++++++++++
 arch/powerpc/kvm/book3s_pr.c          |  3 +-
 4 files changed, 66 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 349ed85..08891d0 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -107,8 +107,9 @@ struct kvmppc_vcpu_book3s {
 #define CONTEXT_GUEST		1
 #define CONTEXT_GUEST_END	2
 
-#define VSID_REAL	0x1fffffffffc00000ULL
-#define VSID_BAT	0x1fffffffffb00000ULL
+#define VSID_REAL	0x0fffffffffc00000ULL
+#define VSID_BAT	0x0fffffffffb00000ULL
+#define VSID_1T		0x1000000000000000ULL
 #define VSID_REAL_DR	0x2000000000000000ULL
 #define VSID_REAL_IR	0x4000000000000000ULL
 #define VSID_PR		0x8000000000000000ULL
@@ -123,6 +124,7 @@ extern void kvmppc_mmu_book3s_32_init(struct kvm_vcpu *vcpu);
 extern void kvmppc_mmu_book3s_hv_init(struct kvm_vcpu *vcpu);
 extern int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte);
 extern int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr);
+extern void kvmppc_mmu_flush_segment(struct kvm_vcpu *vcpu, ulong eaddr, ulong seg_size);
 extern void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu);
 extern int kvmppc_book3s_hv_page_fault(struct kvm_run *run,
 			struct kvm_vcpu *vcpu, unsigned long addr,
diff --git a/arch/powerpc/kvm/book3s_64_mmu.c b/arch/powerpc/kvm/book3s_64_mmu.c
index 7519124..739bfba 100644
--- a/arch/powerpc/kvm/book3s_64_mmu.c
+++ b/arch/powerpc/kvm/book3s_64_mmu.c
@@ -26,6 +26,7 @@
 #include <asm/tlbflush.h>
 #include <asm/kvm_ppc.h>
 #include <asm/kvm_book3s.h>
+#include <asm/mmu-hash64.h>
 
 /* #define DEBUG_MMU */
 
@@ -76,6 +77,24 @@ static struct kvmppc_slb *kvmppc_mmu_book3s_64_find_slbe(
 	return NULL;
 }
 
+static int kvmppc_slb_sid_shift(struct kvmppc_slb *slbe)
+{
+	return slbe->tb ? SID_SHIFT_1T : SID_SHIFT;
+}
+
+static u64 kvmppc_slb_offset_mask(struct kvmppc_slb *slbe)
+{
+	return (1ul << kvmppc_slb_sid_shift(slbe)) - 1;
+}
+
+static u64 kvmppc_slb_calc_vpn(struct kvmppc_slb *slb, gva_t eaddr)
+{
+	eaddr &= kvmppc_slb_offset_mask(slb);
+
+	return (eaddr >> VPN_SHIFT) |
+		((slb->vsid) << (kvmppc_slb_sid_shift(slb) - VPN_SHIFT));
+}
+
 static u64 kvmppc_mmu_book3s_64_ea_to_vp(struct kvm_vcpu *vcpu, gva_t eaddr,
 					 bool data)
 {
@@ -85,11 +104,7 @@ static u64 kvmppc_mmu_book3s_64_ea_to_vp(struct kvm_vcpu *vcpu, gva_t eaddr,
 	if (!slb)
 		return 0;
 
-	if (slb->tb)
-		return (((u64)eaddr >> 12) & 0xfffffff) |
-		       (((u64)slb->vsid) << 28);
-
-	return (((u64)eaddr >> 12) & 0xffff) | (((u64)slb->vsid) << 16);
+	return kvmppc_slb_calc_vpn(slb, eaddr);
 }
 
 static int kvmppc_mmu_book3s_64_get_pagesize(struct kvmppc_slb *slbe)
@@ -100,7 +115,8 @@ static int kvmppc_mmu_book3s_64_get_pagesize(struct kvmppc_slb *slbe)
 static u32 kvmppc_mmu_book3s_64_get_page(struct kvmppc_slb *slbe, gva_t eaddr)
 {
 	int p = kvmppc_mmu_book3s_64_get_pagesize(slbe);
-	return ((eaddr & 0xfffffff) >> p);
+
+	return ((eaddr & kvmppc_slb_offset_mask(slbe)) >> p);
 }
 
 static hva_t kvmppc_mmu_book3s_64_get_pteg(
@@ -109,13 +125,15 @@ static hva_t kvmppc_mmu_book3s_64_get_pteg(
 				bool second)
 {
 	u64 hash, pteg, htabsize;
-	u32 page;
+	u32 ssize;
 	hva_t r;
+	u64 vpn;
 
-	page = kvmppc_mmu_book3s_64_get_page(slbe, eaddr);
 	htabsize = ((1 << ((vcpu_book3s->sdr1 & 0x1f) + 11)) - 1);
 
-	hash = slbe->vsid ^ page;
+	vpn = kvmppc_slb_calc_vpn(slbe, eaddr);
+	ssize = slbe->tb ? MMU_SEGSIZE_1T : MMU_SEGSIZE_256M;
+	hash = hpt_hash(vpn, kvmppc_mmu_book3s_64_get_pagesize(slbe), ssize);
 	if (second)
 		hash = ~hash;
 	hash &= ((1ULL << 39ULL) - 1ULL);
@@ -146,7 +164,7 @@ static u64 kvmppc_mmu_book3s_64_get_avpn(struct kvmppc_slb *slbe, gva_t eaddr)
 	u64 avpn;
 
 	avpn = kvmppc_mmu_book3s_64_get_page(slbe, eaddr);
-	avpn |= slbe->vsid << (28 - p);
+	avpn |= slbe->vsid << (kvmppc_slb_sid_shift(slbe) - p);
 
 	if (p < 24)
 		avpn >>= ((80 - p) - 56) - 8;
@@ -189,13 +207,15 @@ static int kvmppc_mmu_book3s_64_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
 	if (!slbe)
 		goto no_seg_found;
 
+	avpn = kvmppc_mmu_book3s_64_get_avpn(slbe, eaddr);
+	if (slbe->tb)
+		avpn |= SLB_VSID_B_1T;
+
 do_second:
 	ptegp = kvmppc_mmu_book3s_64_get_pteg(vcpu_book3s, slbe, eaddr, second);
 	if (kvm_is_error_hva(ptegp))
 		goto no_page_found;
 
-	avpn = kvmppc_mmu_book3s_64_get_avpn(slbe, eaddr);
-
 	if(copy_from_user(pteg, (void __user *)ptegp, sizeof(pteg))) {
 		printk(KERN_ERR "KVM can't copy data from 0x%lx!\n", ptegp);
 		goto no_page_found;
@@ -218,7 +238,7 @@ do_second:
 			continue;
 
 		/* AVPN compare */
-		if (HPTE_V_AVPN_VAL(avpn) == HPTE_V_AVPN_VAL(v)) {
+		if (HPTE_V_COMPARE(avpn, v)) {
 			u8 pp = (r & HPTE_R_PP) | key;
 			int eaddr_mask = 0xFFF;
 
@@ -324,7 +344,7 @@ static void kvmppc_mmu_book3s_64_slbmte(struct kvm_vcpu *vcpu, u64 rs, u64 rb)
 	slbe->large = (rs & SLB_VSID_L) ? 1 : 0;
 	slbe->tb    = (rs & SLB_VSID_B_1T) ? 1 : 0;
 	slbe->esid  = slbe->tb ? esid_1t : esid;
-	slbe->vsid  = rs >> 12;
+	slbe->vsid  = (rs & ~SLB_VSID_B) >> (kvmppc_slb_sid_shift(slbe) - 16);
 	slbe->valid = (rb & SLB_ESID_V) ? 1 : 0;
 	slbe->Ks    = (rs & SLB_VSID_KS) ? 1 : 0;
 	slbe->Kp    = (rs & SLB_VSID_KP) ? 1 : 0;
@@ -365,6 +385,7 @@ static u64 kvmppc_mmu_book3s_64_slbmfev(struct kvm_vcpu *vcpu, u64 slb_nr)
 static void kvmppc_mmu_book3s_64_slbie(struct kvm_vcpu *vcpu, u64 ea)
 {
 	struct kvmppc_slb *slbe;
+	u64 seg_size;
 
 	dprintk("KVM MMU: slbie(0x%llx)\n", ea);
 
@@ -379,7 +400,8 @@ static void kvmppc_mmu_book3s_64_slbie(struct kvm_vcpu *vcpu, u64 ea)
 	slbe->orige = 0;
 	slbe->origv = 0;
 
-	kvmppc_mmu_map_segment(vcpu, ea);
+	seg_size = 1ull << kvmppc_slb_sid_shift(slbe);
+	kvmppc_mmu_flush_segment(vcpu, ea & ~(seg_size - 1), seg_size);
 }
 
 static void kvmppc_mmu_book3s_64_slbia(struct kvm_vcpu *vcpu)
@@ -462,8 +484,14 @@ static int kvmppc_mmu_book3s_64_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
 
 	if (vcpu->arch.shared->msr & (MSR_DR|MSR_IR)) {
 		slb = kvmppc_mmu_book3s_64_find_slbe(vcpu, ea);
-		if (slb)
+		if (slb) {
 			gvsid = slb->vsid;
+			if (slb->tb) {
+				gvsid <<= SID_SHIFT_1T - SID_SHIFT;
+				gvsid |= esid & ((1ul << (SID_SHIFT_1T - SID_SHIFT)) - 1);
+				gvsid |= VSID_1T;
+			}
+		}
 	}
 
 	switch (vcpu->arch.shared->msr & (MSR_DR|MSR_IR)) {
diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index 2c6e7ee..b350d94 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -301,6 +301,23 @@ out:
 	return r;
 }
 
+void kvmppc_mmu_flush_segment(struct kvm_vcpu *vcpu, ulong ea, ulong seg_size)
+{
+	struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
+	ulong seg_mask = -seg_size;
+	int i;
+
+	for (i = 1; i < svcpu->slb_max; i++) {
+		if ((svcpu->slb[i].esid & SLB_ESID_V) &&
+		    (svcpu->slb[i].esid & seg_mask) == ea) {
+			/* Invalidate this entry */
+			svcpu->slb[i].esid = 0;
+		}
+	}
+
+	svcpu_put(svcpu);
+}
+
 void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu)
 {
 	struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index bdc40b8..19498a5 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1239,8 +1239,7 @@ out:
 #ifdef CONFIG_PPC64
 int kvm_vm_ioctl_get_smmu_info(struct kvm *kvm, struct kvm_ppc_smmu_info *info)
 {
-	/* No flags */
-	info->flags = 0;
+	info->flags = KVM_PPC_1T_SEGMENTS;
 
 	/* SLB is always 64 entries */
 	info->slb_size = 64;
-- 
1.8.3.1



* Re: [PATCH 3/5] KVM: PPC: Book3S PR: Don't keep scanning HPTEG after we find a match
  2013-06-22  7:14 ` [PATCH 3/5] KVM: PPC: Book3S PR: Don't keep scanning HPTEG after we find a match Paul Mackerras
@ 2013-06-22 17:42   ` Alexander Graf
  0 siblings, 0 replies; 11+ messages in thread
From: Alexander Graf @ 2013-06-22 17:42 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: kvm-ppc, kvm


On 22.06.2013, at 09:14, Paul Mackerras wrote:

> The loop in kvmppc_mmu_book3s_64_xlate() that looks up a translation
> in the guest hashed page table (HPT) keeps going if it finds an
> HPTE that matches but doesn't allow access.  This is incorrect; it
> is different from what the hardware does, and there should never be
> more than one matching HPTE anyway.  This fixes it to stop when any
> matching HPTE is found.

IIRC I put in that logic to make it compatible with how QEMU handled HTAB lookups. Since then this has changed in QEMU though, and doing it this way is architecturally as correct as the other (the spec just says undefined behavior for duplicate entries), so I'm fine with the change.

However, does book3s_32 differ here? I would like them to at least behave identically for undefined behavior. So in this case, unless you know of a difference for 32-bit, please also provide a patch that makes the same change in book3s_32_mmu.c.


Alex

> 
> Signed-off-by: Paul Mackerras <paulus@samba.org>
> ---
> arch/powerpc/kvm/book3s_64_mmu.c | 14 ++------------
> 1 file changed, 2 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/powerpc/kvm/book3s_64_mmu.c b/arch/powerpc/kvm/book3s_64_mmu.c
> index b871721..2e93bb5 100644
> --- a/arch/powerpc/kvm/book3s_64_mmu.c
> +++ b/arch/powerpc/kvm/book3s_64_mmu.c
> @@ -167,7 +167,6 @@ static int kvmppc_mmu_book3s_64_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
> 	int i;
> 	u8 key = 0;
> 	bool found = false;
> -	bool perm_err = false;
> 	int second = 0;
> 	ulong mp_ea = vcpu->arch.magic_page_ea;
> 
> @@ -248,11 +247,6 @@ do_second:
> 				break;
> 			}
> 
> -			if (!gpte->may_read) {
> -				perm_err = true;
> -				continue;
> -			}
> -
> 			dprintk("KVM MMU: Translated 0x%lx [0x%llx] -> 0x%llx "
> 				"-> 0x%lx\n",
> 				eaddr, avpn, gpte->vpage, gpte->raddr);
> @@ -281,6 +275,8 @@ do_second:
> 		if (pteg[i+1] != oldr)
> 			copy_to_user((void __user *)ptegp, pteg, sizeof(pteg));
> 
> +		if (!gpte->may_read)
> +			return -EPERM;
> 		return 0;
> 	} else {
> 		dprintk("KVM MMU: No PTE found (ea=0x%lx sdr1=0x%llx "
> @@ -296,13 +292,7 @@ do_second:
> 		}
> 	}
> 
> -
> no_page_found:
> -
> -
> -	if (perm_err)
> -		return -EPERM;
> -
> 	return -ENOENT;
> 
> no_seg_found:
> -- 
> 1.8.3.1
> 
> --
> To unsubscribe from this list: send the line "unsubscribe kvm-ppc" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: [PATCH 4/5] KVM: PPC: Book3S PR: Invalidate SLB entries properly
  2013-06-22  7:15 ` [PATCH 4/5] KVM: PPC: Book3S PR: Invalidate SLB entries properly Paul Mackerras
@ 2013-06-22 17:48   ` Alexander Graf
  2013-06-22 23:30     ` Paul Mackerras
  0 siblings, 1 reply; 11+ messages in thread
From: Alexander Graf @ 2013-06-22 17:48 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: kvm-ppc, kvm


On 22.06.2013, at 09:15, Paul Mackerras wrote:

> At present, if the guest creates a valid SLB (segment lookaside buffer)
> entry with the slbmte instruction, then invalidates it with the slbie
> instruction, then reads the entry with the slbmfee/slbmfev instructions,
> the result of the slbmfee will have the valid bit set, even though the
> entry is not actually considered valid by the host.  This is confusing,
> if not worse.  This fixes it by zeroing out the orige and origv fields
> of the SLB entry structure when the entry is invalidated.
> 
> Signed-off-by: Paul Mackerras <paulus@samba.org>

Could you please change this to only remove the V bit from orige? I've found it very useful for debugging to see old, invalidated entries in the SLB when dumping it. The spec declares anything but the toggle of the V bit as undefined.


Alex

> ---
> arch/powerpc/kvm/book3s_64_mmu.c | 7 ++++++-
> 1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/kvm/book3s_64_mmu.c b/arch/powerpc/kvm/book3s_64_mmu.c
> index 2e93bb5..7519124 100644
> --- a/arch/powerpc/kvm/book3s_64_mmu.c
> +++ b/arch/powerpc/kvm/book3s_64_mmu.c
> @@ -376,6 +376,8 @@ static void kvmppc_mmu_book3s_64_slbie(struct kvm_vcpu *vcpu, u64 ea)
> 	dprintk("KVM MMU: slbie(0x%llx, 0x%llx)\n", ea, slbe->esid);
> 
> 	slbe->valid = false;
> +	slbe->orige = 0;
> +	slbe->origv = 0;
> 
> 	kvmppc_mmu_map_segment(vcpu, ea);
> }
> @@ -386,8 +388,11 @@ static void kvmppc_mmu_book3s_64_slbia(struct kvm_vcpu *vcpu)
> 
> 	dprintk("KVM MMU: slbia()\n");
> 
> -	for (i = 1; i < vcpu->arch.slb_nr; i++)
> +	for (i = 1; i < vcpu->arch.slb_nr; i++) {
> 		vcpu->arch.slb[i].valid = false;
> +		vcpu->arch.slb[i].orige = 0;
> +		vcpu->arch.slb[i].origv = 0;
> +	}
> 
> 	if (vcpu->arch.shared->msr & MSR_IR) {
> 		kvmppc_mmu_flush_segments(vcpu);
> -- 
> 1.8.3.1
> 


* Re: [PATCH 0/5] Some fixes and improvements for PR KVM
  2013-06-22  7:12 [PATCH 0/5] Some fixes and improvements for PR KVM Paul Mackerras
                   ` (4 preceding siblings ...)
  2013-06-22  7:16 ` [PATCH 5/5] KVM: PPC: Book3S PR: Allow guest to use 1TB segments Paul Mackerras
@ 2013-06-22 17:57 ` Alexander Graf
  5 siblings, 0 replies; 11+ messages in thread
From: Alexander Graf @ 2013-06-22 17:57 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: kvm-ppc, kvm


On 22.06.2013, at 09:12, Paul Mackerras wrote:

> This series of 5 patches is against the KVM next branch.  It fixes
> some bugs in PR-style KVM on Book 3S PPC and adds support for the
> guest using 1TB segments as well as 256MB segments.  My ultimate goal
> is to make it possible to configure both HV and PR KVM into the same
> kernel binary, and these are just the first few steps.

Thanks, applied all except 4/5 to kvm-ppc-queue.


Alex


* Re: [PATCH 4/5] KVM: PPC: Book3S PR: Invalidate SLB entries properly
  2013-06-22 17:48   ` Alexander Graf
@ 2013-06-22 23:30     ` Paul Mackerras
  2013-06-22 23:38       ` Alexander Graf
  0 siblings, 1 reply; 11+ messages in thread
From: Paul Mackerras @ 2013-06-22 23:30 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On Sat, Jun 22, 2013 at 07:48:05PM +0200, Alexander Graf wrote:
> 
> On 22.06.2013, at 09:15, Paul Mackerras wrote:
> 
> > At present, if the guest creates a valid SLB (segment lookaside buffer)
> > entry with the slbmte instruction, then invalidates it with the slbie
> > instruction, then reads the entry with the slbmfee/slbmfev instructions,
> > the result of the slbmfee will have the valid bit set, even though the
> > entry is not actually considered valid by the host.  This is confusing,
> > if not worse.  This fixes it by zeroing out the orige and origv fields
> > of the SLB entry structure when the entry is invalidated.
> > 
> > Signed-off-by: Paul Mackerras <paulus@samba.org>
> 
> Could you please change this to only remove the V bit from orige? I've found it very useful for debugging to see old, invalidated entries in the SLB when dumping it. The spec declares anything but the toggle of the V bit as undefined.

I did it like this since the architecture (since version 2.03)
specifies that slbmfee and slbmfev both return all zeroes for invalid
entries.  I'm not sure what you mean by your last sentence there.

Paul.


* Re: [PATCH 4/5] KVM: PPC: Book3S PR: Invalidate SLB entries properly
  2013-06-22 23:30     ` Paul Mackerras
@ 2013-06-22 23:38       ` Alexander Graf
  0 siblings, 0 replies; 11+ messages in thread
From: Alexander Graf @ 2013-06-22 23:38 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: kvm-ppc, kvm


On 23.06.2013, at 01:30, Paul Mackerras wrote:

> On Sat, Jun 22, 2013 at 07:48:05PM +0200, Alexander Graf wrote:
>> 
>> On 22.06.2013, at 09:15, Paul Mackerras wrote:
>> 
>>> At present, if the guest creates a valid SLB (segment lookaside buffer)
>>> entry with the slbmte instruction, then invalidates it with the slbie
>>> instruction, then reads the entry with the slbmfee/slbmfev instructions,
>>> the result of the slbmfee will have the valid bit set, even though the
>>> entry is not actually considered valid by the host.  This is confusing,
>>> if not worse.  This fixes it by zeroing out the orige and origv fields
>>> of the SLB entry structure when the entry is invalidated.
>>> 
>>> Signed-off-by: Paul Mackerras <paulus@samba.org>
>> 
>> Could you please change this to only remove the V bit from orige? I've found it very useful for debugging to see old, invalidated entries in the SLB when dumping it. The spec declares anything but the toggle of the V bit as undefined.
> 
> I did it like this since the architecture (since version 2.03)
> specifies that slbmfee and slbmfev both return all zeroes for invalid
> entries.  I'm not sure what you mean by your last sentence there.

Oh, really? I based all of the work back then on 2.01, so maybe that change passed me by unnoticed. But you're right, it's certainly explicitly mentioned in 2.06. Guess this patch is perfectly valid then :).


Thanks, applied to kvm-ppc-queue.


Alex


