From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alexander Graf
Subject: Re: [PATCH 3/3] KVM: PPC: Book3S: Make kvmppc_ld return a more accurate error indication
Date: Mon, 28 Jul 2014 14:26:01 +0200
Message-ID: <53D64159.8050303@suse.de>
References: <1405756776-7634-1-git-send-email-paulus@samba.org> <1405756776-7634-4-git-send-email-paulus@samba.org>
Mime-Version: 1.0
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 7bit
To: Paul Mackerras, kvm@vger.kernel.org, kvm-ppc@vger.kernel.org
Return-path:
In-Reply-To: <1405756776-7634-4-git-send-email-paulus@samba.org>
Sender: kvm-ppc-owner@vger.kernel.org
List-Id: kvm.vger.kernel.org

On 19.07.14 09:59, Paul Mackerras wrote:
> At present, kvmppc_ld calls kvmppc_xlate, and if kvmppc_xlate returns
> any error indication, it returns -ENOENT, which is taken to mean an
> HPTE not found error. However, the error could have been a segment
> fault (no SLB entry) or a permission error. Similarly,
> kvmppc_pte_to_hva currently does permission checking, but any error
> from it is taken by kvmppc_ld to mean that the access is an emulated
> MMIO access. Also, kvmppc_ld does no execute permission checking.
>
> This fixes these problems by (a) returning any error from kvmppc_xlate
> directly, (b) moving the permission check from kvmppc_pte_to_hva
> into kvmppc_ld, and (c) adding an execute permission check to kvmppc_ld.
>
> This is similar to what was done for kvmppc_st() by commit 82ff911317c3
> ("KVM: PPC: Deflect page write faults properly in kvmppc_st").
>
> Signed-off-by: Paul Mackerras
> ---
>  arch/powerpc/kvm/book3s.c | 25 ++++++++++++-------------
>  1 file changed, 12 insertions(+), 13 deletions(-)
>
> diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
> index 31facfc..087f8f9 100644
> --- a/arch/powerpc/kvm/book3s.c
> +++ b/arch/powerpc/kvm/book3s.c
> @@ -413,17 +413,10 @@ static hva_t kvmppc_bad_hva(void)
>  	return PAGE_OFFSET;
>  }
>
> -static hva_t kvmppc_pte_to_hva(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte,
> -			       bool read)
> +static hva_t kvmppc_pte_to_hva(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte)
>  {
>  	hva_t hpage;
>
> -	if (read && !pte->may_read)
> -		goto err;
> -
> -	if (!read && !pte->may_write)
> -		goto err;
> -
>  	hpage = gfn_to_hva(vcpu->kvm, pte->raddr >> PAGE_SHIFT);
>  	if (kvm_is_error_hva(hpage))
>  		goto err;
> @@ -462,15 +455,23 @@ int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
>  {
>  	struct kvmppc_pte pte;
>  	hva_t hva = *eaddr;
> +	int rc;
>
>  	vcpu->stat.ld++;
>
> -	if (kvmppc_xlate(vcpu, *eaddr, data, false, &pte))
> -		goto nopte;
> +	rc = kvmppc_xlate(vcpu, *eaddr, data, false, &pte);
> +	if (rc)
> +		return rc;
>
>  	*eaddr = pte.raddr;
>
> -	hva = kvmppc_pte_to_hva(vcpu, &pte, true);
> +	if (!pte.may_read)
> +		return -EPERM;
> +
> +	if (!data && !pte.may_execute)
> +		return -ENOEXEC;

We should probably do a full audit of all that code and decide who is
responsible for returning errors where. IIRC our MMU frontends already
check pte.may* and return error codes respectively. However,
double-checking doesn't hurt for now, so I've applied this patch
regardless.


Alex