From: "Aneesh Kumar K.V"
Subject: Re: [RFC PATCH] KVM: PPC: Book3S: MMIO emulation support for little endian guests
Date: Fri, 04 Oct 2013 19:18:44 +0530
Message-ID: <87ob75kuyr.fsf@linux.vnet.ibm.com>
References: <1380798224-27024-1-git-send-email-clg@fr.ibm.com>
In-Reply-To: <1380798224-27024-1-git-send-email-clg@fr.ibm.com>
To: Cédric Le Goater, agraf@suse.de, paulus@samba.org
Cc: kvm-ppc@vger.kernel.org, kvm@vger.kernel.org, Cédric Le Goater

Cédric Le Goater writes:

> MMIO emulation reads the last instruction executed by the guest
> and then emulates. If the guest is running in Little Endian mode,
> the instruction needs to be byte-swapped before being emulated.
>
> This patch stores the last instruction in the endian order of the
> host, primarily doing a byte-swap if needed. The common code
> which fetches last_inst uses a helper routine kvmppc_need_byteswap(),
> and the exit paths for the Book3S PR and HV guests use their own
> version in assembly.
>
> kvmppc_emulate_instruction() also uses kvmppc_need_byteswap() to
> define in which endian order the mmio needs to be done.
>
> The patch is based on Alex Graf's kvm-ppc-queue branch and it
> has been tested on Big Endian and Little Endian HV guests and
> Big Endian PR guests.
>
> Signed-off-by: Cédric Le Goater
> ---
>
> Here are some comments/questions:
>
> * the host is assumed to be running in Big Endian. When Little Endian
>   hosts are supported in the future, we will use the cpu features to
>   fix kvmppc_need_byteswap()
>
> * the 'is_bigendian' parameter of the routines kvmppc_handle_load()
>   and kvmppc_handle_store() seems redundant but the *BRX opcodes
>   make the improvements unclear. We could eventually rename the
>   parameter to byteswap and the attribute vcpu->arch.mmio_is_bigendian
>   to vcpu->arch.mmio_need_byteswap. Anyhow, the current naming sucks
>   and I would be happy to have some directions to fix it.
>
>  arch/powerpc/include/asm/kvm_book3s.h   | 15 ++++++-
>  arch/powerpc/kvm/book3s_64_mmu_hv.c     |  4 ++
>  arch/powerpc/kvm/book3s_hv_rmhandlers.S | 14 +++++-
>  arch/powerpc/kvm/book3s_segment.S       | 14 +++++-
>  arch/powerpc/kvm/emulate.c              | 71 +++++++++++++++++--------------
>  5 files changed, 83 insertions(+), 35 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
> index 0ec00f4..36c5573 100644
> --- a/arch/powerpc/include/asm/kvm_book3s.h
> +++ b/arch/powerpc/include/asm/kvm_book3s.h
> @@ -270,14 +270,22 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
>  	return vcpu->arch.pc;
>  }
>
> +static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
> +{
> +	return vcpu->arch.shared->msr & MSR_LE;
> +}
> +

Maybe kvmppc_need_instbyteswap()? Because for data it also depends on
the SLE bit. Don't we also need to check the host platform endianness
here, i.e. whether the host is __BIG_ENDIAN__?
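Something like the below, maybe? Just a sketch of one possible way to
handle it, assuming a byte-swap is needed exactly when guest and host
endianness differ; untested:

	static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
	{
	#ifdef __BIG_ENDIAN__
		/* BE host: swap only when the guest runs LE */
		return vcpu->arch.shared->msr & MSR_LE;
	#else
		/* LE host: swap only when the guest runs BE */
		return !(vcpu->arch.shared->msr & MSR_LE);
	#endif
	}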
>  static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
>  {
>  	ulong pc = kvmppc_get_pc(vcpu);
>
>  	/* Load the instruction manually if it failed to do so in the
>  	 * exit path */
> -	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
> +	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED) {
>  		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
> +		if (kvmppc_need_byteswap(vcpu))
> +			vcpu->arch.last_inst = swab32(vcpu->arch.last_inst);
> +	}
>
>  	return vcpu->arch.last_inst;
>  }
> @@ -293,8 +301,11 @@ static inline u32 kvmppc_get_last_sc(struct kvm_vcpu *vcpu)
>
>  	/* Load the instruction manually if it failed to do so in the
>  	 * exit path */
> -	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
> +	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED) {
>  		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
> +		if (kvmppc_need_byteswap(vcpu))
> +			vcpu->arch.last_inst = swab32(vcpu->arch.last_inst);
> +	}
>
>  	return vcpu->arch.last_inst;
>  }
> diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
> index 3a89b85..28130c7 100644
> --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
> +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
> @@ -547,6 +547,10 @@ static int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  		ret = kvmppc_ld(vcpu, &srr0, sizeof(u32), &last_inst, false);
>  		if (ret != EMULATE_DONE || last_inst == KVM_INST_FETCH_FAILED)
>  			return RESUME_GUEST;
> +
> +		if (kvmppc_need_byteswap(vcpu))
> +			last_inst = swab32(last_inst);
> +
>  		vcpu->arch.last_inst = last_inst;
>  	}
>
> diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> index dd80953..1d3ee40 100644
> --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> @@ -1393,14 +1393,26 @@ fast_interrupt_c_return:
>  	lwz	r8, 0(r10)
>  	mtmsrd	r3
>
> +	ld	r0, VCPU_MSR(r9)
> +
> +	/* r10 = vcpu->arch.msr & MSR_LE */
> +	rldicl	r10, r0, 0, 63
> +	cmpdi	r10, 0
> +	bne	2f
> +
>  	/* Store the result */
>  	stw	r8, VCPU_LAST_INST(r9)
>
>  	/* Unset guest mode. */
> -	li	r0, KVM_GUEST_MODE_NONE
> +1:	li	r0, KVM_GUEST_MODE_NONE
>  	stb	r0, HSTATE_IN_GUEST(r13)
>  	b	guest_exit_cont
>
> +	/* Swap and store the result */
> +2:	addi	r11, r9, VCPU_LAST_INST
> +	stwbrx	r8, 0, r11
> +	b	1b
> +
>  /*
>   * Similarly for an HISI, reflect it to the guest as an ISI unless
>   * it is an HPTE not found fault for a page that we have paged out.
> diff --git a/arch/powerpc/kvm/book3s_segment.S b/arch/powerpc/kvm/book3s_segment.S
> index 1abe478..bf20b45 100644
> --- a/arch/powerpc/kvm/book3s_segment.S
> +++ b/arch/powerpc/kvm/book3s_segment.S
> @@ -287,7 +287,19 @@ ld_last_inst:
>  	sync
>
>  #endif
> -	stw	r0, SVCPU_LAST_INST(r13)
> +	ld	r8, SVCPU_SHADOW_SRR1(r13)
> +
> +	/* r10 = vcpu->arch.msr & MSR_LE */
> +	rldicl	r10, r0, 0, 63

That should be

	rldicl	r10, r8, 0, 63

shouldn't it? SVCPU_SHADOW_SRR1 was just loaded into r8; r0 still holds
the fetched instruction.

-aneesh
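PS: for reference, rldicl rD,rS,0,63 rotates rS left by zero bits and
masks off everything but bit 63 (IBM numbering), i.e. the least
significant bit. A sketch of the intended test in C (hypothetical
helper, not part of the patch):

	/* What "rldicl r10, rS, 0, 63" computes: rS & 1.  MSR_LE is the
	 * least significant MSR bit, so this isolates the LE flag --
	 * provided rS actually holds the guest MSR. */
	static inline unsigned long guest_is_le(unsigned long guest_msr)
	{
		return guest_msr & 0x1;
	}

With r0 as the source, the code would be testing bit 63 of the fetched
instruction instead of the guest MSR.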