From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Aneesh Kumar K.V"
Subject: [PATCH 2/6] KVM: PPC: BOOK3S: HV: Deny virtual page class key update via h_protect
Date: Sun, 29 Jun 2014 16:47:31 +0530
Message-ID: <1404040655-12076-4-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
References: <1404040655-12076-1-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
Cc: linuxppc-dev@lists.ozlabs.org, kvm-ppc@vger.kernel.org, kvm@vger.kernel.org,
	"Aneesh Kumar K.V"
To: agraf@suse.de, benh@kernel.crashing.org, paulus@samba.org
Return-path: Received: from e23smtp08.au.ibm.com ([202.81.31.141]:54924 "EHLO e23smtp08.au.ibm.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751162AbaF2LSG (ORCPT );
	Sun, 29 Jun 2014 07:18:06 -0400
Received: from /spool/local by e23smtp08.au.ibm.com with IBM ESMTP SMTP Gateway:
	Authorized Use Only! Violators will be prosecuted for from ;
	Sun, 29 Jun 2014 21:18:03 +1000
In-Reply-To: <1404040655-12076-1-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

This makes h_protect consistent with h_enter, where we clear the key bits.
We also want to use the virtual page class key protection mechanism for
indicating host page faults. For that we will be using key class indexes
30 and 31. So prevent the guest from updating key bits until we add proper
support for the virtual page class protection mechanism for the guest.

This has no impact on PAPR Linux guests, because the Linux guest currently
doesn't use the virtual page class key protection model.

Signed-off-by: Aneesh Kumar K.V
---
 arch/powerpc/kvm/book3s_hv_rm_mmu.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index 157a5f35edfa..f908845f7379 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -658,13 +658,17 @@ long kvmppc_h_protect(struct kvm_vcpu *vcpu, unsigned long flags,
 	}
 
 	v = pte;
+	/*
+	 * We ignore key bits here. We use class 31 and 30 for
+	 * hypervisor purposes. We still don't track the page
+	 * class separately. Until then, don't allow h_protect
+	 * to change key bits.
+	 */
 	bits = (flags << 55) & HPTE_R_PP0;
-	bits |= (flags << 48) & HPTE_R_KEY_HI;
-	bits |= flags & (HPTE_R_PP | HPTE_R_N | HPTE_R_KEY_LO);
+	bits |= flags & (HPTE_R_PP | HPTE_R_N);
 
 	/* Update guest view of 2nd HPTE dword */
-	mask = HPTE_R_PP0 | HPTE_R_PP | HPTE_R_N |
-		HPTE_R_KEY_HI | HPTE_R_KEY_LO;
+	mask = HPTE_R_PP0 | HPTE_R_PP | HPTE_R_N;
 	rev = real_vmalloc_addr(&kvm->arch.revmap[pte_index]);
 	if (rev) {
 		r = (rev->guest_rpte & ~mask) | bits;
-- 
1.9.1