public inbox for kvm@vger.kernel.org
* [PULL 0/7] ppc patch queue 2013-06-30
@ 2013-06-30  1:38 Alexander Graf
  2013-06-30  1:38 ` [PATCH 1/7] KVM: PPC: Guard doorbell exception with CONFIG_PPC_DOORBELL Alexander Graf
                   ` (8 more replies)
  0 siblings, 9 replies; 10+ messages in thread
From: Alexander Graf @ 2013-06-30  1:38 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm@vger.kernel.org mailing list

Hi Marcelo / Gleb,

This is my current patch queue for ppc.  Please pull and apply to next, so it
makes its way into 3.11.

Changes include:

  - Book3S PR: Add support for 1TB segments
  - Book3S PR fixes

Alex


The following changes since commit 96f7edf9a54ca834f030ddf30f91750b3d737a03:

  Merge git://git.linaro.org/people/cdall/linux-kvm-arm.git tags/kvm-arm-3.11 into queue (2013-06-27 14:20:54 +0300)

are available in the git repository at:


  git://github.com/agraf/linux-2.6.git kvm-ppc-queue

for you to fetch changes up to a3ff5fbc94a829680d4aa005cd17add1c1a1fb5b:

  KVM: PPC: Ignore PIR writes (2013-06-30 03:33:22 +0200)

----------------------------------------------------------------
Alexander Graf (1):
      KVM: PPC: Ignore PIR writes

Paul Mackerras (5):
      KVM: PPC: Book3S PR: Fix proto-VSID calculations
      KVM: PPC: Book3S PR: Fix invalidation of SLB entry 0 on guest entry
      KVM: PPC: Book3S PR: Don't keep scanning HPTEG after we find a match
      KVM: PPC: Book3S PR: Allow guest to use 1TB segments
      KVM: PPC: Book3S PR: Invalidate SLB entries properly

Tiejun Chen (1):
      KVM: PPC: Guard doorbell exception with CONFIG_PPC_DOORBELL

 arch/powerpc/include/asm/kvm_book3s.h |  6 ++-
 arch/powerpc/kvm/book3s_64_mmu.c      | 81 ++++++++++++++++++++++-------------
 arch/powerpc/kvm/book3s_64_mmu_host.c | 21 ++++++++-
 arch/powerpc/kvm/book3s_64_slb.S      | 13 +-----
 arch/powerpc/kvm/book3s_pr.c          |  3 +-
 arch/powerpc/kvm/booke.c              |  2 +-
 arch/powerpc/kvm/emulate.c            |  3 ++
 7 files changed, 82 insertions(+), 47 deletions(-)

* [PATCH 1/7] KVM: PPC: Guard doorbell exception with CONFIG_PPC_DOORBELL
  2013-06-30  1:38 [PULL 0/7] ppc patch queue 2013-06-30 Alexander Graf
@ 2013-06-30  1:38 ` Alexander Graf
  2013-06-30  1:38 ` [PATCH 2/7] KVM: PPC: Book3S PR: Fix proto-VSID calculations Alexander Graf
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Alexander Graf @ 2013-06-30  1:38 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm@vger.kernel.org mailing list, Tiejun Chen

From: Tiejun Chen <tiejun.chen@windriver.com>

Availability of the doorbell_exception function is guarded by
CONFIG_PPC_DOORBELL. Use the same define to guard our caller
of it.

Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
[agraf: improve patch description]
Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/booke.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 1020119..62d4ece 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -795,7 +795,7 @@ static void kvmppc_restart_interrupt(struct kvm_vcpu *vcpu,
 		kvmppc_fill_pt_regs(&regs);
 		timer_interrupt(&regs);
 		break;
-#if defined(CONFIG_PPC_FSL_BOOK3E) || defined(CONFIG_PPC_BOOK3E_64)
+#if defined(CONFIG_PPC_DOORBELL)
 	case BOOKE_INTERRUPT_DOORBELL:
 		kvmppc_fill_pt_regs(&regs);
 		doorbell_exception(&regs);
-- 
1.8.1.4


* [PATCH 2/7] KVM: PPC: Book3S PR: Fix proto-VSID calculations
  2013-06-30  1:38 [PULL 0/7] ppc patch queue 2013-06-30 Alexander Graf
  2013-06-30  1:38 ` [PATCH 1/7] KVM: PPC: Guard doorbell exception with CONFIG_PPC_DOORBELL Alexander Graf
@ 2013-06-30  1:38 ` Alexander Graf
  2013-06-30  1:38 ` [PATCH 3/7] KVM: PPC: Book3S PR: Fix invalidation of SLB entry 0 on guest entry Alexander Graf
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Alexander Graf @ 2013-06-30  1:38 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm@vger.kernel.org mailing list, Paul Mackerras

From: Paul Mackerras <paulus@samba.org>

This makes sure the calculation of the proto-VSIDs used by PR KVM
is done with 64-bit arithmetic.  Since vcpu3s->context_id[] is int,
when we do vcpu3s->context_id[0] << ESID_BITS the shift will be done
with 32-bit instructions, possibly leading to significant bits
getting lost, as the context id can be up to 524283 and ESID_BITS is
18.  To fix this we cast the context id to u64 before shifting.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_64_mmu_host.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index 3a9a1ac..2c6e7ee 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -325,9 +325,9 @@ int kvmppc_mmu_init(struct kvm_vcpu *vcpu)
 		return -1;
 	vcpu3s->context_id[0] = err;
 
-	vcpu3s->proto_vsid_max = ((vcpu3s->context_id[0] + 1)
+	vcpu3s->proto_vsid_max = ((u64)(vcpu3s->context_id[0] + 1)
 				  << ESID_BITS) - 1;
-	vcpu3s->proto_vsid_first = vcpu3s->context_id[0] << ESID_BITS;
+	vcpu3s->proto_vsid_first = (u64)vcpu3s->context_id[0] << ESID_BITS;
 	vcpu3s->proto_vsid_next = vcpu3s->proto_vsid_first;
 
 	kvmppc_mmu_hpte_init(vcpu);
-- 
1.8.1.4


* [PATCH 3/7] KVM: PPC: Book3S PR: Fix invalidation of SLB entry 0 on guest entry
  2013-06-30  1:38 [PULL 0/7] ppc patch queue 2013-06-30 Alexander Graf
  2013-06-30  1:38 ` [PATCH 1/7] KVM: PPC: Guard doorbell exception with CONFIG_PPC_DOORBELL Alexander Graf
  2013-06-30  1:38 ` [PATCH 2/7] KVM: PPC: Book3S PR: Fix proto-VSID calculations Alexander Graf
@ 2013-06-30  1:38 ` Alexander Graf
  2013-06-30  1:38 ` [PATCH 4/7] KVM: PPC: Book3S PR: Don't keep scanning HPTEG after we find a match Alexander Graf
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Alexander Graf @ 2013-06-30  1:38 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm@vger.kernel.org mailing list, Paul Mackerras

From: Paul Mackerras <paulus@samba.org>

On entering a PR KVM guest, we invalidate the whole SLB before loading
up the guest entries.  We do this using an slbia instruction, which
invalidates all entries except entry 0, followed by an slbie to
invalidate entry 0.  However, the slbie turns out to be ineffective
in some circumstances (specifically when the host linear mapping uses
64k pages) because of errors in computing the parameter to the slbie.
The result is that the guest kernel hangs very early in boot because
it takes a DSI the first time it tries to access kernel data using
a linear mapping address in real mode.

Currently we construct bits 36 - 43 (big-endian numbering) of the slbie
parameter by taking bits 56 - 63 of the SLB VSID doubleword.  These bits
for the slbie are C (class, 1 bit), B (segment size, 2 bits) and 5
reserved bits.  For the SLB VSID doubleword these are C (class, 1 bit),
reserved (1 bit), LP (large page size, 2 bits), and 4 reserved bits.
Thus we are not setting the B field correctly, and when LP = 01 as
it is for 64k pages, we are setting a reserved bit.

Rather than add more instructions to calculate the slbie parameter
correctly, this takes a simpler approach, which is to set entry 0 to
zeroes explicitly.  Normally slbmte should not be used to invalidate
an entry, since it doesn't invalidate the ERATs, but it is OK to use
it to invalidate an entry if it is immediately followed by slbia,
which does invalidate the ERATs.  (This has been confirmed with the
Power architects.)  This approach takes fewer instructions and will
work whatever the contents of entry 0.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_64_slb.S | 13 ++-----------
 1 file changed, 2 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_slb.S b/arch/powerpc/kvm/book3s_64_slb.S
index 56b983e..4f0caec 100644
--- a/arch/powerpc/kvm/book3s_64_slb.S
+++ b/arch/powerpc/kvm/book3s_64_slb.S
@@ -66,10 +66,6 @@ slb_exit_skip_ ## num:
 
 	ld	r12, PACA_SLBSHADOWPTR(r13)
 
-	/* Save off the first entry so we can slbie it later */
-	ld	r10, SHADOW_SLB_ESID(0)(r12)
-	ld	r11, SHADOW_SLB_VSID(0)(r12)
-
 	/* Remove bolted entries */
 	UNBOLT_SLB_ENTRY(0)
 	UNBOLT_SLB_ENTRY(1)
@@ -81,15 +77,10 @@ slb_exit_skip_ ## num:
 
 	/* Flush SLB */
 
+	li	r10, 0
+	slbmte	r10, r10
 	slbia
 
-	/* r0 = esid & ESID_MASK */
-	rldicr  r10, r10, 0, 35
-	/* r0 |= CLASS_BIT(VSID) */
-	rldic   r12, r11, 56 - 36, 36
-	or      r10, r10, r12
-	slbie	r10
-
 	/* Fill SLB with our shadow */
 
 	lbz	r12, SVCPU_SLB_MAX(r3)
-- 
1.8.1.4

* [PATCH 4/7] KVM: PPC: Book3S PR: Don't keep scanning HPTEG after we find a match
  2013-06-30  1:38 [PULL 0/7] ppc patch queue 2013-06-30 Alexander Graf
                   ` (2 preceding siblings ...)
  2013-06-30  1:38 ` [PATCH 3/7] KVM: PPC: Book3S PR: Fix invalidation of SLB entry 0 on guest entry Alexander Graf
@ 2013-06-30  1:38 ` Alexander Graf
  2013-06-30  1:38 ` [PATCH 5/7] KVM: PPC: Book3S PR: Allow guest to use 1TB segments Alexander Graf
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Alexander Graf @ 2013-06-30  1:38 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm@vger.kernel.org mailing list, Paul Mackerras

From: Paul Mackerras <paulus@samba.org>

The loop in kvmppc_mmu_book3s_64_xlate() that looks up a translation
in the guest hashed page table (HPT) keeps going if it finds an
HPTE that matches but doesn't allow access.  This is incorrect; it
is different from what the hardware does, and there should never be
more than one matching HPTE anyway.  This fixes it to stop when any
matching HPTE is found.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_64_mmu.c | 14 ++------------
 1 file changed, 2 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu.c b/arch/powerpc/kvm/book3s_64_mmu.c
index b871721..2e93bb5 100644
--- a/arch/powerpc/kvm/book3s_64_mmu.c
+++ b/arch/powerpc/kvm/book3s_64_mmu.c
@@ -167,7 +167,6 @@ static int kvmppc_mmu_book3s_64_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
 	int i;
 	u8 key = 0;
 	bool found = false;
-	bool perm_err = false;
 	int second = 0;
 	ulong mp_ea = vcpu->arch.magic_page_ea;
 
@@ -248,11 +247,6 @@ do_second:
 				break;
 			}
 
-			if (!gpte->may_read) {
-				perm_err = true;
-				continue;
-			}
-
 			dprintk("KVM MMU: Translated 0x%lx [0x%llx] -> 0x%llx "
 				"-> 0x%lx\n",
 				eaddr, avpn, gpte->vpage, gpte->raddr);
@@ -281,6 +275,8 @@ do_second:
 		if (pteg[i+1] != oldr)
 			copy_to_user((void __user *)ptegp, pteg, sizeof(pteg));
 
+		if (!gpte->may_read)
+			return -EPERM;
 		return 0;
 	} else {
 		dprintk("KVM MMU: No PTE found (ea=0x%lx sdr1=0x%llx "
@@ -296,13 +292,7 @@ do_second:
 		}
 	}
 
-
 no_page_found:
-
-
-	if (perm_err)
-		return -EPERM;
-
 	return -ENOENT;
 
 no_seg_found:
-- 
1.8.1.4


* [PATCH 5/7] KVM: PPC: Book3S PR: Allow guest to use 1TB segments
  2013-06-30  1:38 [PULL 0/7] ppc patch queue 2013-06-30 Alexander Graf
                   ` (3 preceding siblings ...)
  2013-06-30  1:38 ` [PATCH 4/7] KVM: PPC: Book3S PR: Don't keep scanning HPTEG after we find a match Alexander Graf
@ 2013-06-30  1:38 ` Alexander Graf
  2013-06-30  1:38 ` [PATCH 6/7] KVM: PPC: Book3S PR: Invalidate SLB entries properly Alexander Graf
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Alexander Graf @ 2013-06-30  1:38 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm@vger.kernel.org mailing list, Paul Mackerras

From: Paul Mackerras <paulus@samba.org>

With this, the guest can use 1TB segments as well as 256MB segments.
Since we now have the situation where a single emulated guest segment
could correspond to multiple shadow segments (as the shadow segments
are still 256MB segments), this adds a new kvmppc_mmu_flush_segment()
to scan for all shadow segments that need to be removed.

This restructures the guest HPT (hashed page table) lookup code to
use the correct hashing and matching functions for HPTEs within a
1TB segment.  We use the standard hpt_hash() function instead of
open-coding the hash calculation, and we use HPTE_V_COMPARE() with
an AVPN value that has the B (segment size) field included.  The
calculation of avpn is done a little earlier since it doesn't change
in the loop starting at the do_second label.

The computation in kvmppc_mmu_book3s_64_esid_to_vsid() changes so that
it returns a 256MB VSID even if the guest SLB entry is a 1TB entry.
This is because the users of this function are creating 256MB SLB
entries.  We set a new VSID_1T flag so that entries created from 1T
segments don't collide with entries from 256MB segments.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s.h |  6 ++--
 arch/powerpc/kvm/book3s_64_mmu.c      | 60 +++++++++++++++++++++++++----------
 arch/powerpc/kvm/book3s_64_mmu_host.c | 17 ++++++++++
 arch/powerpc/kvm/book3s_pr.c          |  3 +-
 4 files changed, 66 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 349ed85..08891d0 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -107,8 +107,9 @@ struct kvmppc_vcpu_book3s {
 #define CONTEXT_GUEST		1
 #define CONTEXT_GUEST_END	2
 
-#define VSID_REAL	0x1fffffffffc00000ULL
-#define VSID_BAT	0x1fffffffffb00000ULL
+#define VSID_REAL	0x0fffffffffc00000ULL
+#define VSID_BAT	0x0fffffffffb00000ULL
+#define VSID_1T		0x1000000000000000ULL
 #define VSID_REAL_DR	0x2000000000000000ULL
 #define VSID_REAL_IR	0x4000000000000000ULL
 #define VSID_PR		0x8000000000000000ULL
@@ -123,6 +124,7 @@ extern void kvmppc_mmu_book3s_32_init(struct kvm_vcpu *vcpu);
 extern void kvmppc_mmu_book3s_hv_init(struct kvm_vcpu *vcpu);
 extern int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte);
 extern int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr);
+extern void kvmppc_mmu_flush_segment(struct kvm_vcpu *vcpu, ulong eaddr, ulong seg_size);
 extern void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu);
 extern int kvmppc_book3s_hv_page_fault(struct kvm_run *run,
 			struct kvm_vcpu *vcpu, unsigned long addr,
diff --git a/arch/powerpc/kvm/book3s_64_mmu.c b/arch/powerpc/kvm/book3s_64_mmu.c
index 2e93bb5..ee435ba 100644
--- a/arch/powerpc/kvm/book3s_64_mmu.c
+++ b/arch/powerpc/kvm/book3s_64_mmu.c
@@ -26,6 +26,7 @@
 #include <asm/tlbflush.h>
 #include <asm/kvm_ppc.h>
 #include <asm/kvm_book3s.h>
+#include <asm/mmu-hash64.h>
 
 /* #define DEBUG_MMU */
 
@@ -76,6 +77,24 @@ static struct kvmppc_slb *kvmppc_mmu_book3s_64_find_slbe(
 	return NULL;
 }
 
+static int kvmppc_slb_sid_shift(struct kvmppc_slb *slbe)
+{
+	return slbe->tb ? SID_SHIFT_1T : SID_SHIFT;
+}
+
+static u64 kvmppc_slb_offset_mask(struct kvmppc_slb *slbe)
+{
+	return (1ul << kvmppc_slb_sid_shift(slbe)) - 1;
+}
+
+static u64 kvmppc_slb_calc_vpn(struct kvmppc_slb *slb, gva_t eaddr)
+{
+	eaddr &= kvmppc_slb_offset_mask(slb);
+
+	return (eaddr >> VPN_SHIFT) |
+		((slb->vsid) << (kvmppc_slb_sid_shift(slb) - VPN_SHIFT));
+}
+
 static u64 kvmppc_mmu_book3s_64_ea_to_vp(struct kvm_vcpu *vcpu, gva_t eaddr,
 					 bool data)
 {
@@ -85,11 +104,7 @@ static u64 kvmppc_mmu_book3s_64_ea_to_vp(struct kvm_vcpu *vcpu, gva_t eaddr,
 	if (!slb)
 		return 0;
 
-	if (slb->tb)
-		return (((u64)eaddr >> 12) & 0xfffffff) |
-		       (((u64)slb->vsid) << 28);
-
-	return (((u64)eaddr >> 12) & 0xffff) | (((u64)slb->vsid) << 16);
+	return kvmppc_slb_calc_vpn(slb, eaddr);
 }
 
 static int kvmppc_mmu_book3s_64_get_pagesize(struct kvmppc_slb *slbe)
@@ -100,7 +115,8 @@ static int kvmppc_mmu_book3s_64_get_pagesize(struct kvmppc_slb *slbe)
 static u32 kvmppc_mmu_book3s_64_get_page(struct kvmppc_slb *slbe, gva_t eaddr)
 {
 	int p = kvmppc_mmu_book3s_64_get_pagesize(slbe);
-	return ((eaddr & 0xfffffff) >> p);
+
+	return ((eaddr & kvmppc_slb_offset_mask(slbe)) >> p);
 }
 
 static hva_t kvmppc_mmu_book3s_64_get_pteg(
@@ -109,13 +125,15 @@ static hva_t kvmppc_mmu_book3s_64_get_pteg(
 				bool second)
 {
 	u64 hash, pteg, htabsize;
-	u32 page;
+	u32 ssize;
 	hva_t r;
+	u64 vpn;
 
-	page = kvmppc_mmu_book3s_64_get_page(slbe, eaddr);
 	htabsize = ((1 << ((vcpu_book3s->sdr1 & 0x1f) + 11)) - 1);
 
-	hash = slbe->vsid ^ page;
+	vpn = kvmppc_slb_calc_vpn(slbe, eaddr);
+	ssize = slbe->tb ? MMU_SEGSIZE_1T : MMU_SEGSIZE_256M;
+	hash = hpt_hash(vpn, kvmppc_mmu_book3s_64_get_pagesize(slbe), ssize);
 	if (second)
 		hash = ~hash;
 	hash &= ((1ULL << 39ULL) - 1ULL);
@@ -146,7 +164,7 @@ static u64 kvmppc_mmu_book3s_64_get_avpn(struct kvmppc_slb *slbe, gva_t eaddr)
 	u64 avpn;
 
 	avpn = kvmppc_mmu_book3s_64_get_page(slbe, eaddr);
-	avpn |= slbe->vsid << (28 - p);
+	avpn |= slbe->vsid << (kvmppc_slb_sid_shift(slbe) - p);
 
 	if (p < 24)
 		avpn >>= ((80 - p) - 56) - 8;
@@ -189,13 +207,15 @@ static int kvmppc_mmu_book3s_64_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
 	if (!slbe)
 		goto no_seg_found;
 
+	avpn = kvmppc_mmu_book3s_64_get_avpn(slbe, eaddr);
+	if (slbe->tb)
+		avpn |= SLB_VSID_B_1T;
+
 do_second:
 	ptegp = kvmppc_mmu_book3s_64_get_pteg(vcpu_book3s, slbe, eaddr, second);
 	if (kvm_is_error_hva(ptegp))
 		goto no_page_found;
 
-	avpn = kvmppc_mmu_book3s_64_get_avpn(slbe, eaddr);
-
 	if(copy_from_user(pteg, (void __user *)ptegp, sizeof(pteg))) {
 		printk(KERN_ERR "KVM can't copy data from 0x%lx!\n", ptegp);
 		goto no_page_found;
@@ -218,7 +238,7 @@ do_second:
 			continue;
 
 		/* AVPN compare */
-		if (HPTE_V_AVPN_VAL(avpn) == HPTE_V_AVPN_VAL(v)) {
+		if (HPTE_V_COMPARE(avpn, v)) {
 			u8 pp = (r & HPTE_R_PP) | key;
 			int eaddr_mask = 0xFFF;
 
@@ -324,7 +344,7 @@ static void kvmppc_mmu_book3s_64_slbmte(struct kvm_vcpu *vcpu, u64 rs, u64 rb)
 	slbe->large = (rs & SLB_VSID_L) ? 1 : 0;
 	slbe->tb    = (rs & SLB_VSID_B_1T) ? 1 : 0;
 	slbe->esid  = slbe->tb ? esid_1t : esid;
-	slbe->vsid  = rs >> 12;
+	slbe->vsid  = (rs & ~SLB_VSID_B) >> (kvmppc_slb_sid_shift(slbe) - 16);
 	slbe->valid = (rb & SLB_ESID_V) ? 1 : 0;
 	slbe->Ks    = (rs & SLB_VSID_KS) ? 1 : 0;
 	slbe->Kp    = (rs & SLB_VSID_KP) ? 1 : 0;
@@ -365,6 +385,7 @@ static u64 kvmppc_mmu_book3s_64_slbmfev(struct kvm_vcpu *vcpu, u64 slb_nr)
 static void kvmppc_mmu_book3s_64_slbie(struct kvm_vcpu *vcpu, u64 ea)
 {
 	struct kvmppc_slb *slbe;
+	u64 seg_size;
 
 	dprintk("KVM MMU: slbie(0x%llx)\n", ea);
 
@@ -377,7 +398,8 @@ static void kvmppc_mmu_book3s_64_slbie(struct kvm_vcpu *vcpu, u64 ea)
 
 	slbe->valid = false;
 
-	kvmppc_mmu_map_segment(vcpu, ea);
+	seg_size = 1ull << kvmppc_slb_sid_shift(slbe);
+	kvmppc_mmu_flush_segment(vcpu, ea & ~(seg_size - 1), seg_size);
 }
 
 static void kvmppc_mmu_book3s_64_slbia(struct kvm_vcpu *vcpu)
@@ -457,8 +479,14 @@ static int kvmppc_mmu_book3s_64_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
 
 	if (vcpu->arch.shared->msr & (MSR_DR|MSR_IR)) {
 		slb = kvmppc_mmu_book3s_64_find_slbe(vcpu, ea);
-		if (slb)
+		if (slb) {
 			gvsid = slb->vsid;
+			if (slb->tb) {
+				gvsid <<= SID_SHIFT_1T - SID_SHIFT;
+				gvsid |= esid & ((1ul << (SID_SHIFT_1T - SID_SHIFT)) - 1);
+				gvsid |= VSID_1T;
+			}
+		}
 	}
 
 	switch (vcpu->arch.shared->msr & (MSR_DR|MSR_IR)) {
diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index 2c6e7ee..b350d94 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -301,6 +301,23 @@ out:
 	return r;
 }
 
+void kvmppc_mmu_flush_segment(struct kvm_vcpu *vcpu, ulong ea, ulong seg_size)
+{
+	struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
+	ulong seg_mask = -seg_size;
+	int i;
+
+	for (i = 1; i < svcpu->slb_max; i++) {
+		if ((svcpu->slb[i].esid & SLB_ESID_V) &&
+		    (svcpu->slb[i].esid & seg_mask) == ea) {
+			/* Invalidate this entry */
+			svcpu->slb[i].esid = 0;
+		}
+	}
+
+	svcpu_put(svcpu);
+}
+
 void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu)
 {
 	struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index bdc40b8..19498a5 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1239,8 +1239,7 @@ out:
 #ifdef CONFIG_PPC64
 int kvm_vm_ioctl_get_smmu_info(struct kvm *kvm, struct kvm_ppc_smmu_info *info)
 {
-	/* No flags */
-	info->flags = 0;
+	info->flags = KVM_PPC_1T_SEGMENTS;
 
 	/* SLB is always 64 entries */
 	info->slb_size = 64;
-- 
1.8.1.4


* [PATCH 6/7] KVM: PPC: Book3S PR: Invalidate SLB entries properly
  2013-06-30  1:38 [PULL 0/7] ppc patch queue 2013-06-30 Alexander Graf
                   ` (4 preceding siblings ...)
  2013-06-30  1:38 ` [PATCH 5/7] KVM: PPC: Book3S PR: Allow guest to use 1TB segments Alexander Graf
@ 2013-06-30  1:38 ` Alexander Graf
  2013-06-30  1:38 ` [PATCH 7/7] KVM: PPC: Ignore PIR writes Alexander Graf
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Alexander Graf @ 2013-06-30  1:38 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm@vger.kernel.org mailing list, Paul Mackerras

From: Paul Mackerras <paulus@samba.org>

At present, if the guest creates a valid SLB (segment lookaside buffer)
entry with the slbmte instruction, then invalidates it with the slbie
instruction, then reads the entry with the slbmfee/slbmfev instructions,
the result of the slbmfee will have the valid bit set, even though the
entry is not actually considered valid by the host.  This is confusing,
if not worse.  This fixes it by zeroing out the orige and origv fields
of the SLB entry structure when the entry is invalidated.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_64_mmu.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu.c b/arch/powerpc/kvm/book3s_64_mmu.c
index ee435ba..739bfba 100644
--- a/arch/powerpc/kvm/book3s_64_mmu.c
+++ b/arch/powerpc/kvm/book3s_64_mmu.c
@@ -397,6 +397,8 @@ static void kvmppc_mmu_book3s_64_slbie(struct kvm_vcpu *vcpu, u64 ea)
 	dprintk("KVM MMU: slbie(0x%llx, 0x%llx)\n", ea, slbe->esid);
 
 	slbe->valid = false;
+	slbe->orige = 0;
+	slbe->origv = 0;
 
 	seg_size = 1ull << kvmppc_slb_sid_shift(slbe);
 	kvmppc_mmu_flush_segment(vcpu, ea & ~(seg_size - 1), seg_size);
@@ -408,8 +410,11 @@ static void kvmppc_mmu_book3s_64_slbia(struct kvm_vcpu *vcpu)
 
 	dprintk("KVM MMU: slbia()\n");
 
-	for (i = 1; i < vcpu->arch.slb_nr; i++)
+	for (i = 1; i < vcpu->arch.slb_nr; i++) {
 		vcpu->arch.slb[i].valid = false;
+		vcpu->arch.slb[i].orige = 0;
+		vcpu->arch.slb[i].origv = 0;
+	}
 
 	if (vcpu->arch.shared->msr & MSR_IR) {
 		kvmppc_mmu_flush_segments(vcpu);
-- 
1.8.1.4


* [PATCH 7/7] KVM: PPC: Ignore PIR writes
  2013-06-30  1:38 [PULL 0/7] ppc patch queue 2013-06-30 Alexander Graf
                   ` (5 preceding siblings ...)
  2013-06-30  1:38 ` [PATCH 6/7] KVM: PPC: Book3S PR: Invalidate SLB entries properly Alexander Graf
@ 2013-06-30  1:38 ` Alexander Graf
  2013-06-30  1:59 ` [PULL 0/7] ppc patch queue 2013-06-30 Alexander Graf
  2013-07-02  7:47 ` Paolo Bonzini
  8 siblings, 0 replies; 10+ messages in thread
From: Alexander Graf @ 2013-06-30  1:38 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm@vger.kernel.org mailing list

While technically it's legal to write to PIR and have the identifier changed,
we don't implement logic to do so because we simply expose vcpu_id to the guest.

So instead, let's ignore writes to PIR. This ensures that we don't inject faults
into the guest for something the guest is allowed to do. While at it, we cross
our fingers and hope that the guest also doesn't mind that values it writes to
PIR are not reflected in later reads.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/emulate.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 631a265..2c52ada 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -169,6 +169,9 @@ static int kvmppc_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs)
 		vcpu->arch.shared->sprg3 = spr_val;
 		break;
 
+	/* PIR can legally be written, but we ignore it */
+	case SPRN_PIR: break;
+
 	default:
 		emulated = kvmppc_core_emulate_mtspr(vcpu, sprn,
 						     spr_val);
-- 
1.8.1.4

* Re: [PULL 0/7] ppc patch queue 2013-06-30
  2013-06-30  1:38 [PULL 0/7] ppc patch queue 2013-06-30 Alexander Graf
                   ` (6 preceding siblings ...)
  2013-06-30  1:38 ` [PATCH 7/7] KVM: PPC: Ignore PIR writes Alexander Graf
@ 2013-06-30  1:59 ` Alexander Graf
  2013-07-02  7:47 ` Paolo Bonzini
  8 siblings, 0 replies; 10+ messages in thread
From: Alexander Graf @ 2013-06-30  1:59 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm@vger.kernel.org mailing list, Paolo Bonzini, Gleb Natapov


On 30.06.2013, at 03:38, Alexander Graf wrote:

> Hi Marcelo / Gleb,

Paolo / Gleb of course. I've changed my script now :).


Alex

* Re: [PULL 0/7] ppc patch queue 2013-06-30
  2013-06-30  1:38 [PULL 0/7] ppc patch queue 2013-06-30 Alexander Graf
                   ` (7 preceding siblings ...)
  2013-06-30  1:59 ` [PULL 0/7] ppc patch queue 2013-06-30 Alexander Graf
@ 2013-07-02  7:47 ` Paolo Bonzini
  8 siblings, 0 replies; 10+ messages in thread
From: Paolo Bonzini @ 2013-07-02  7:47 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm@vger.kernel.org mailing list

On 30/06/2013 03:38, Alexander Graf wrote:
> Hi Marcelo / Gleb,
> [...]

Pulled, thanks!

Paolo
