* [PATCH v3 0/7] KVM: PPC: reimplement mmio emulation with analyse_instr()
From: wei.guo.simon @ 2018-05-21 5:24 UTC
To: kvm-ppc; +Cc: Paul Mackerras, kvm, linuxppc-dev, Simon Guo
From: Simon Guo <wei.guo.simon@gmail.com>
We already have analyse_instr(), which analyzes an instruction for its
type, size, additional flags, etc. kvmppc_emulate_loadstore() largely
duplicated that work, so it is worthwhile to reimplement it on top of
analyse_instr(). The advantage is that the decoding logic is shared and
the code becomes cleaner and easier to maintain.
This patch series reimplements kvmppc_emulate_loadstore() for various
load/store instructions.
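For reference, the common skeleton that the series converges on looks
roughly like the outline below (condensed from the diffs in this series;
case bodies are elided):

    struct instruction_op op;

    vcpu->arch.regs.msr = vcpu->arch.shared->msr;
    if (analyse_instr(&op, &vcpu->arch.regs, inst) == 0) {
        int type = op.type & INSTR_TYPE_MASK;
        int size = GETSIZE(op.type);

        switch (type) {
        case LOAD:      /* BYTEREV/SIGNEXT/UPDATE flags (patch 2) */
        case STORE:     /* op.val is already byte-reversed if needed */
        case LOAD_FP:   /* FPCONV -> mmio_sp64_extend (patch 4) */
        case STORE_FP:  /* flush FP regs via giveup_ext() first */
        case LOAD_VSX:  /* element_size/VSX_SPLAT pick the copy type */
        case STORE_VSX:
        case LOAD_VMX:  /* per-element offsets (patch 7) */
        case STORE_VMX:
        case CACHEOP:   /* dcbf/dcbi etc.: nothing to do for MMIO */
        default:
            break;
        }
    }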
The test case is located at:
https://github.com/justdoitqd/publicFiles/blob/master/test_mmio.c
- Tested with both PR and HV KVM.
- Also tested with a little-endian host and a big-endian guest.
Tested instruction list:
lbz lbzu lbzx ld ldbrx
ldu ldx lfd lfdu lfdx
lfiwax lfiwzx lfs lfsu lfsx
lha lhau lhax lhbrx lhz
lhzu lhzx lvx lwax lwbrx
lwz lwzu lwzx lxsdx lxsiwax
lxsiwzx lxsspx lxvd2x lxvdsx lxvw4x
stb stbu stbx std stdbrx
stdu stdx stfd stfdu stfdx
stfiwx stfs stfsx sth sthbrx
sthu sthx stvx stw stwbrx
stwu stwx stxsdx stxsiwx stxsspx
stxvd2x stxvw4x
lvebx stvebx
lvehx stvehx
lvewx stvewx
V2 -> V3 changes:
1) added an FPU SIGNEXT case to handle lfiwax, based on review comments.
2) minor change to jump to the "out:" label when EMULATE_DO_MMIO
is returned, based on review comments.
3) rebased onto Paul's kvm-ppc-next branch (3 patches were already
merged into that branch, so this patch set includes only 7 patches).
V1 -> V2 changes:
1) corrected the patch split issue in v1.
2) revised some commit messages/code comments per review comments.
3) removed the incorrect special handling for stxsiwx.
4) removed the mmio_update_ra logic and moved the RA update into
kvmppc_emulate_loadstore().
5) reworked giveup_ext(), which is only meaningful when non-NULL.
6) rewrote the VMX emulation code and covered the remaining VMX
instructions:
lvebx stvebx
lvehx stvehx
lvewx stvewx
Simon Guo (7):
KVM: PPC: add KVMPPC_VSX_COPY_WORD_LOAD_DUMP type support for mmio
emulation
KVM: PPC: reimplement non-SIMD LOAD/STORE instruction mmio emulation
with analyse_intr() input
KVM: PPC: add giveup_ext() hook for PPC KVM ops
KVM: PPC: reimplement LOAD_FP/STORE_FP instruction mmio emulation with
analyse_intr() input
KVM: PPC: reimplements LOAD_VSX/STORE_VSX instruction mmio emulation
with analyse_intr() input
KVM: PPC: expand mmio_vsx_copy_type to mmio_copy_type to cover VMX
load/store elem types
KVM: PPC: reimplements LOAD_VMX/STORE_VMX instruction mmio emulation
with analyse_intr() input
arch/powerpc/include/asm/kvm_host.h | 11 +-
arch/powerpc/include/asm/kvm_ppc.h | 17 +-
arch/powerpc/kvm/book3s.c | 4 +-
arch/powerpc/kvm/book3s_pr.c | 1 +
arch/powerpc/kvm/e500_mmu_host.c | 8 +-
arch/powerpc/kvm/emulate_loadstore.c | 751 +++++++++++------------------------
arch/powerpc/kvm/powerpc.c | 299 +++++++++++---
7 files changed, 498 insertions(+), 593 deletions(-)
--
1.8.3.1
* [PATCH v3 1/7] KVM: PPC: add KVMPPC_VSX_COPY_WORD_LOAD_DUMP type support for mmio emulation
From: wei.guo.simon @ 2018-05-21 5:24 UTC
To: kvm-ppc; +Cc: Paul Mackerras, kvm, linuxppc-dev, Simon Guo
From: Simon Guo <wei.guo.simon@gmail.com>
Some VSX instructions, such as lxvwsx, splat a word into a VSR. This
patch adds the VSX copy type KVMPPC_VSX_COPY_WORD_LOAD_DUMP to support
this.
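To illustrate the intended layout with a stand-alone sketch (not kernel
code): after a splatting word load, every 32-bit element of the target
VSR holds the loaded word, which is what the vsx32val[0..3] assignments
in kvmppc_set_vsr_word_dump() below reproduce:

    #include <stdint.h>
    #include <stdio.h>

    /* Models the 16-byte VSR, like union kvmppc_one_reg. */
    union vsr {
        uint64_t dword[2];
        uint32_t word[4];
    };

    int main(void)
    {
        union vsr v;
        uint32_t gpr = 0xdeadbeef; /* word returned by the MMIO load */
        int i;

        for (i = 0; i < 4; i++)
            v.word[i] = gpr;       /* splat into every word slot */

        /* prints: deadbeefdeadbeef deadbeefdeadbeef */
        printf("%016llx %016llx\n",
               (unsigned long long)v.dword[0],
               (unsigned long long)v.dword[1]);
        return 0;
    }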
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
---
arch/powerpc/include/asm/kvm_host.h | 1 +
arch/powerpc/kvm/powerpc.c | 23 +++++++++++++++++++++++
2 files changed, 24 insertions(+)
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 89f44ec..4bade29 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -453,6 +453,7 @@ struct mmio_hpte_cache {
#define KVMPPC_VSX_COPY_WORD 1
#define KVMPPC_VSX_COPY_DWORD 2
#define KVMPPC_VSX_COPY_DWORD_LOAD_DUMP 3
+#define KVMPPC_VSX_COPY_WORD_LOAD_DUMP 4
struct openpic;
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index bef27b1..45daf3b 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -907,6 +907,26 @@ static inline void kvmppc_set_vsr_dword_dump(struct kvm_vcpu *vcpu,
}
}
+static inline void kvmppc_set_vsr_word_dump(struct kvm_vcpu *vcpu,
+ u32 gpr)
+{
+ union kvmppc_one_reg val;
+ int index = vcpu->arch.io_gpr & KVM_MMIO_REG_MASK;
+
+ if (vcpu->arch.mmio_vsx_tx_sx_enabled) {
+ val.vsx32val[0] = gpr;
+ val.vsx32val[1] = gpr;
+ val.vsx32val[2] = gpr;
+ val.vsx32val[3] = gpr;
+ VCPU_VSX_VR(vcpu, index) = val.vval;
+ } else {
+ val.vsx32val[0] = gpr;
+ val.vsx32val[1] = gpr;
+ VCPU_VSX_FPR(vcpu, index, 0) = val.vsxval[0];
+ VCPU_VSX_FPR(vcpu, index, 1) = val.vsxval[0];
+ }
+}
+
static inline void kvmppc_set_vsr_word(struct kvm_vcpu *vcpu,
u32 gpr32)
{
@@ -1061,6 +1081,9 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
else if (vcpu->arch.mmio_vsx_copy_type ==
KVMPPC_VSX_COPY_DWORD_LOAD_DUMP)
kvmppc_set_vsr_dword_dump(vcpu, gpr);
+ else if (vcpu->arch.mmio_vsx_copy_type ==
+ KVMPPC_VSX_COPY_WORD_LOAD_DUMP)
+ kvmppc_set_vsr_word_dump(vcpu, gpr);
break;
#endif
#ifdef CONFIG_ALTIVEC
--
1.8.3.1
* [PATCH v3 2/7] KVM: PPC: reimplement non-SIMD LOAD/STORE instruction mmio emulation with analyse_instr() input
From: wei.guo.simon @ 2018-05-21 5:24 UTC
To: kvm-ppc; +Cc: Paul Mackerras, kvm, linuxppc-dev, Simon Guo
From: Simon Guo <wei.guo.simon@gmail.com>
This patch reimplements non-SIMD LOAD/STORE instruction MMIO emulation
with analyse_instr() input. It utilizes the BYTEREV/UPDATE/SIGNEXT
properties exported by analyse_instr() and invokes
kvmppc_handle_load(s)/kvmppc_handle_store() accordingly.
It also moves the CACHEOP type handling into the skeleton.
instruction_type within kvm_ppc.h is renamed to instruction_fetch_type
to avoid a conflict with sstep.h.
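As a concrete example of the flag mapping (following the LOAD handling
in the diff below):

    lwbrx -> op.type = LOAD | BYTEREV, size 4
          -> kvmppc_handle_load(run, vcpu, op.reg, 4, 0);
    lhau  -> op.type = LOAD | SIGNEXT | UPDATE, size 2
          -> kvmppc_handle_loads(run, vcpu, op.reg, 2, 1);
             kvmppc_set_gpr(vcpu, op.update_reg, op.ea);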
Suggested-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
arch/powerpc/include/asm/kvm_ppc.h | 6 +-
arch/powerpc/kvm/book3s.c | 4 +-
arch/powerpc/kvm/e500_mmu_host.c | 8 +-
arch/powerpc/kvm/emulate_loadstore.c | 271 ++++++-----------------------------
4 files changed, 51 insertions(+), 238 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index abe7032..139cdf0 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -52,7 +52,7 @@ enum emulation_result {
EMULATE_EXIT_USER, /* emulation requires exit to user-space */
};
-enum instruction_type {
+enum instruction_fetch_type {
INST_GENERIC,
INST_SC, /* system call */
};
@@ -93,7 +93,7 @@ extern int kvmppc_handle_vsx_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
int is_default_endian);
extern int kvmppc_load_last_inst(struct kvm_vcpu *vcpu,
- enum instruction_type type, u32 *inst);
+ enum instruction_fetch_type type, u32 *inst);
extern int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
bool data);
@@ -330,7 +330,7 @@ struct kvmppc_ops {
extern struct kvmppc_ops *kvmppc_pr_ops;
static inline int kvmppc_get_last_inst(struct kvm_vcpu *vcpu,
- enum instruction_type type, u32 *inst)
+ enum instruction_fetch_type type, u32 *inst)
{
int ret = EMULATE_DONE;
u32 fetched_inst;
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 97d4a11..320cdcf 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -450,8 +450,8 @@ int kvmppc_xlate(struct kvm_vcpu *vcpu, ulong eaddr, enum xlate_instdata xlid,
return r;
}
-int kvmppc_load_last_inst(struct kvm_vcpu *vcpu, enum instruction_type type,
- u32 *inst)
+int kvmppc_load_last_inst(struct kvm_vcpu *vcpu,
+ enum instruction_fetch_type type, u32 *inst)
{
ulong pc = kvmppc_get_pc(vcpu);
int r;
diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index c878b4f..8f2985e 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -625,8 +625,8 @@ void kvmppc_mmu_map(struct kvm_vcpu *vcpu, u64 eaddr, gpa_t gpaddr,
}
#ifdef CONFIG_KVM_BOOKE_HV
-int kvmppc_load_last_inst(struct kvm_vcpu *vcpu, enum instruction_type type,
- u32 *instr)
+int kvmppc_load_last_inst(struct kvm_vcpu *vcpu,
+ enum instruction_fetch_type type, u32 *instr)
{
gva_t geaddr;
hpa_t addr;
@@ -715,8 +715,8 @@ int kvmppc_load_last_inst(struct kvm_vcpu *vcpu, enum instruction_type type,
return EMULATE_DONE;
}
#else
-int kvmppc_load_last_inst(struct kvm_vcpu *vcpu, enum instruction_type type,
- u32 *instr)
+int kvmppc_load_last_inst(struct kvm_vcpu *vcpu,
+ enum instruction_fetch_type type, u32 *instr)
{
return EMULATE_AGAIN;
}
diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
index b8a3aef..af7c71a 100644
--- a/arch/powerpc/kvm/emulate_loadstore.c
+++ b/arch/powerpc/kvm/emulate_loadstore.c
@@ -31,6 +31,7 @@
#include <asm/kvm_ppc.h>
#include <asm/disassemble.h>
#include <asm/ppc-opcode.h>
+#include <asm/sstep.h>
#include "timing.h"
#include "trace.h"
@@ -84,8 +85,9 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
struct kvm_run *run = vcpu->run;
u32 inst;
int ra, rs, rt;
- enum emulation_result emulated;
+ enum emulation_result emulated = EMULATE_FAIL;
int advance = 1;
+ struct instruction_op op;
/* this default type might be overwritten by subcategories */
kvmppc_set_exit_type(vcpu, EMULATED_INST_EXITS);
@@ -113,144 +115,61 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
vcpu->arch.mmio_vmx_copy_nums = 0;
vcpu->arch.mmio_host_swabbed = 0;
- switch (get_op(inst)) {
- case 31:
- switch (get_xop(inst)) {
- case OP_31_XOP_LWZX:
- emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
- break;
-
- case OP_31_XOP_LWZUX:
- emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
- case OP_31_XOP_LBZX:
- emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
- break;
-
- case OP_31_XOP_LBZUX:
- emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
- case OP_31_XOP_STDX:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs), 8, 1);
- break;
-
- case OP_31_XOP_STDUX:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs), 8, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
+ emulated = EMULATE_FAIL;
+ vcpu->arch.regs.msr = vcpu->arch.shared->msr;
+ vcpu->arch.regs.ccr = vcpu->arch.cr;
+ if (analyse_instr(&op, &vcpu->arch.regs, inst) == 0) {
+ int type = op.type & INSTR_TYPE_MASK;
+ int size = GETSIZE(op.type);
- case OP_31_XOP_STWX:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs), 4, 1);
- break;
+ switch (type) {
+ case LOAD: {
+ int instr_byte_swap = op.type & BYTEREV;
- case OP_31_XOP_STWUX:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs), 4, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
+ if (op.type & SIGNEXT)
+ emulated = kvmppc_handle_loads(run, vcpu,
+ op.reg, size, !instr_byte_swap);
+ else
+ emulated = kvmppc_handle_load(run, vcpu,
+ op.reg, size, !instr_byte_swap);
- case OP_31_XOP_STBX:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs), 1, 1);
- break;
+ if ((op.type & UPDATE) && (emulated != EMULATE_FAIL))
+ kvmppc_set_gpr(vcpu, op.update_reg, op.ea);
- case OP_31_XOP_STBUX:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs), 1, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
- case OP_31_XOP_LHAX:
- emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
- break;
-
- case OP_31_XOP_LHAUX:
- emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
- case OP_31_XOP_LHZX:
- emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
break;
+ }
+ case STORE:
+ /* If byte reversal is needed, op.val has already been
+ * reversed by analyse_instr().
+ */
+ emulated = kvmppc_handle_store(run, vcpu, op.val,
+ size, 1);
- case OP_31_XOP_LHZUX:
- emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
+ if ((op.type & UPDATE) && (emulated != EMULATE_FAIL))
+ kvmppc_set_gpr(vcpu, op.update_reg, op.ea);
- case OP_31_XOP_STHX:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs), 2, 1);
- break;
-
- case OP_31_XOP_STHUX:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs), 2, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
break;
-
- case OP_31_XOP_DCBST:
- case OP_31_XOP_DCBF:
- case OP_31_XOP_DCBI:
+ case CACHEOP:
/* Do nothing. The guest is performing dcbi because
* hardware DMA is not snooped by the dcache, but
* emulated DMA either goes through the dcache as
* normal writes, or the host kernel has handled dcache
- * coherence. */
- break;
-
- case OP_31_XOP_LWBRX:
- emulated = kvmppc_handle_load(run, vcpu, rt, 4, 0);
- break;
-
- case OP_31_XOP_STWBRX:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs), 4, 0);
- break;
-
- case OP_31_XOP_LHBRX:
- emulated = kvmppc_handle_load(run, vcpu, rt, 2, 0);
+ * coherence.
+ */
+ emulated = EMULATE_DONE;
break;
-
- case OP_31_XOP_STHBRX:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs), 2, 0);
- break;
-
- case OP_31_XOP_LDBRX:
- emulated = kvmppc_handle_load(run, vcpu, rt, 8, 0);
- break;
-
- case OP_31_XOP_STDBRX:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs), 8, 0);
- break;
-
- case OP_31_XOP_LDX:
- emulated = kvmppc_handle_load(run, vcpu, rt, 8, 1);
- break;
-
- case OP_31_XOP_LDUX:
- emulated = kvmppc_handle_load(run, vcpu, rt, 8, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
+ default:
break;
+ }
+ }
- case OP_31_XOP_LWAX:
- emulated = kvmppc_handle_loads(run, vcpu, rt, 4, 1);
- break;
- case OP_31_XOP_LWAUX:
- emulated = kvmppc_handle_loads(run, vcpu, rt, 4, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
+ if ((emulated == EMULATE_DONE) || (emulated == EMULATE_DO_MMIO))
+ goto out;
+ switch (get_op(inst)) {
+ case 31:
+ switch (get_xop(inst)) {
#ifdef CONFIG_PPC_FPU
case OP_31_XOP_LFSX:
if (kvmppc_check_fp_disabled(vcpu))
@@ -502,10 +421,6 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
}
break;
- case OP_LWZ:
- emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
- break;
-
#ifdef CONFIG_PPC_FPU
case OP_STFS:
if (kvmppc_check_fp_disabled(vcpu))
@@ -542,110 +457,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
8, 1);
kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
break;
-#endif
-
- case OP_LD:
- rt = get_rt(inst);
- switch (inst & 3) {
- case 0: /* ld */
- emulated = kvmppc_handle_load(run, vcpu, rt, 8, 1);
- break;
- case 1: /* ldu */
- emulated = kvmppc_handle_load(run, vcpu, rt, 8, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
- case 2: /* lwa */
- emulated = kvmppc_handle_loads(run, vcpu, rt, 4, 1);
- break;
- default:
- emulated = EMULATE_FAIL;
- }
- break;
-
- case OP_LWZU:
- emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
- case OP_LBZ:
- emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
- break;
- case OP_LBZU:
- emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
- case OP_STW:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs),
- 4, 1);
- break;
-
- case OP_STD:
- rs = get_rs(inst);
- switch (inst & 3) {
- case 0: /* std */
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs), 8, 1);
- break;
- case 1: /* stdu */
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs), 8, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
- default:
- emulated = EMULATE_FAIL;
- }
- break;
-
- case OP_STWU:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs), 4, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
- case OP_STB:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs), 1, 1);
- break;
-
- case OP_STBU:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs), 1, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
- case OP_LHZ:
- emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
- break;
-
- case OP_LHZU:
- emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
- case OP_LHA:
- emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
- break;
-
- case OP_LHAU:
- emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
- case OP_STH:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs), 2, 1);
- break;
-
- case OP_STHU:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs), 2, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
-#ifdef CONFIG_PPC_FPU
case OP_LFS:
if (kvmppc_check_fp_disabled(vcpu))
return EMULATE_DONE;
@@ -684,6 +496,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
break;
}
+out:
if (emulated == EMULATE_FAIL) {
advance = 0;
kvmppc_core_queue_program(vcpu, 0);
--
1.8.3.1
* [PATCH v3 3/7] KVM: PPC: add giveup_ext() hook for PPC KVM ops
From: wei.guo.simon @ 2018-05-21 5:24 UTC
To: kvm-ppc; +Cc: Paul Mackerras, kvm, linuxppc-dev, Simon Guo
From: Simon Guo <wei.guo.simon@gmail.com>
Currently HV KVM saves the math regs (FP/VEC/VSX) when trapping into
the host, but PR KVM only saves them when the qemu task is switched out
of the CPU or when returning from qemu code.
To emulate an FP/VEC/VSX MMIO load, PR KVM needs to make sure the math
regs have been flushed first, so that it can then reasonably update the
saved VCPU FPR/VEC/VSX area.
This patch adds a giveup_ext() field to the KVM ops, and PR KVM provides
a non-NULL giveup_ext() implementation. kvmppc_complete_mmio_load() can
invoke that hook (when not NULL) to flush the math regs accordingly
before updating the saved register vals.
The math regs flush is also necessary for STORE, which will be covered
by a later patch in this series.
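Since HV KVM already saves the math regs on every trap into the host,
it simply leaves the hook NULL, so callers always use the guarded form
seen in the diff below:

    if (vcpu->kvm->arch.kvm_ops->giveup_ext)
        vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_FP);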
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
arch/powerpc/include/asm/kvm_ppc.h | 1 +
arch/powerpc/kvm/book3s_pr.c | 1 +
arch/powerpc/kvm/powerpc.c | 9 +++++++++
3 files changed, 11 insertions(+)
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 139cdf0..1f087c4 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -324,6 +324,7 @@ struct kvmppc_ops {
int (*get_rmmu_info)(struct kvm *kvm, struct kvm_ppc_rmmu_info *info);
int (*set_smt_mode)(struct kvm *kvm, unsigned long mode,
unsigned long flags);
+ void (*giveup_ext)(struct kvm_vcpu *vcpu, ulong msr);
};
extern struct kvmppc_ops *kvmppc_hv_ops;
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 67061d3..be26636 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1782,6 +1782,7 @@ static long kvm_arch_vm_ioctl_pr(struct file *filp,
#ifdef CONFIG_PPC_BOOK3S_64
.hcall_implemented = kvmppc_hcall_impl_pr,
#endif
+ .giveup_ext = kvmppc_giveup_ext,
};
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 45daf3b..8ce9e7b 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -1061,6 +1061,9 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
kvmppc_set_gpr(vcpu, vcpu->arch.io_gpr, gpr);
break;
case KVM_MMIO_REG_FPR:
+ if (vcpu->kvm->arch.kvm_ops->giveup_ext)
+ vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_FP);
+
VCPU_FPR(vcpu, vcpu->arch.io_gpr & KVM_MMIO_REG_MASK) = gpr;
break;
#ifdef CONFIG_PPC_BOOK3S
@@ -1074,6 +1077,9 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
#endif
#ifdef CONFIG_VSX
case KVM_MMIO_REG_VSX:
+ if (vcpu->kvm->arch.kvm_ops->giveup_ext)
+ vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_VSX);
+
if (vcpu->arch.mmio_vsx_copy_type == KVMPPC_VSX_COPY_DWORD)
kvmppc_set_vsr_dword(vcpu, gpr);
else if (vcpu->arch.mmio_vsx_copy_type == KVMPPC_VSX_COPY_WORD)
@@ -1088,6 +1094,9 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
#endif
#ifdef CONFIG_ALTIVEC
case KVM_MMIO_REG_VMX:
+ if (vcpu->kvm->arch.kvm_ops->giveup_ext)
+ vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_VEC);
+
kvmppc_set_vmx_dword(vcpu, gpr);
break;
#endif
--
1.8.3.1
* [PATCH v3 4/7] KVM: PPC: reimplement LOAD_FP/STORE_FP instruction mmio emulation with analyse_instr() input
From: wei.guo.simon @ 2018-05-21 5:24 UTC
To: kvm-ppc; +Cc: Paul Mackerras, kvm, linuxppc-dev, Simon Guo
From: Simon Guo <wei.guo.simon@gmail.com>
This patch reimplements LOAD_FP/STORE_FP instruction MMIO emulation
with analyse_instr() input. It utilizes the FPCONV/UPDATE properties
exported by analyse_instr() and invokes
kvmppc_handle_load(s)/kvmppc_handle_store() accordingly.
For FP store MMIO emulation, the FP regs need to be flushed first so
that the right FP reg vals can be read from vcpu->arch.fpr, which will
then be stored as the MMIO data.
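As an example of the FPCONV handling: lfs is a single-precision load
whose result is kept in the FPR in double format, so analyse_instr()
flags it FPCONV; the emulation then sets vcpu->arch.mmio_sp64_extend = 1
so the 4-byte MMIO datum is converted to double on completion, whereas
lfd (8 bytes, no conversion) leaves the flag clear.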
Suggested-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
arch/powerpc/kvm/emulate_loadstore.c | 201 ++++++++---------------------------
1 file changed, 44 insertions(+), 157 deletions(-)
diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
index af7c71a..5d38f95 100644
--- a/arch/powerpc/kvm/emulate_loadstore.c
+++ b/arch/powerpc/kvm/emulate_loadstore.c
@@ -138,6 +138,26 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
break;
}
+#ifdef CONFIG_PPC_FPU
+ case LOAD_FP:
+ if (kvmppc_check_fp_disabled(vcpu))
+ return EMULATE_DONE;
+
+ if (op.type & FPCONV)
+ vcpu->arch.mmio_sp64_extend = 1;
+
+ if (op.type & SIGNEXT)
+ emulated = kvmppc_handle_loads(run, vcpu,
+ KVM_MMIO_REG_FPR|op.reg, size, 1);
+ else
+ emulated = kvmppc_handle_load(run, vcpu,
+ KVM_MMIO_REG_FPR|op.reg, size, 1);
+
+ if ((op.type & UPDATE) && (emulated != EMULATE_FAIL))
+ kvmppc_set_gpr(vcpu, op.update_reg, op.ea);
+
+ break;
+#endif
case STORE:
/* If byte reversal is needed, op.val has already been
* reversed by analyse_instr().
@@ -149,6 +169,30 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
kvmppc_set_gpr(vcpu, op.update_reg, op.ea);
break;
+#ifdef CONFIG_PPC_FPU
+ case STORE_FP:
+ if (kvmppc_check_fp_disabled(vcpu))
+ return EMULATE_DONE;
+
+ /* The FP registers need to be flushed so that
+ * kvmppc_handle_store() can read actual FP vals
+ * from vcpu->arch.
+ */
+ if (vcpu->kvm->arch.kvm_ops->giveup_ext)
+ vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu,
+ MSR_FP);
+
+ if (op.type & FPCONV)
+ vcpu->arch.mmio_sp64_extend = 1;
+
+ emulated = kvmppc_handle_store(run, vcpu,
+ VCPU_FPR(vcpu, op.reg), size, 1);
+
+ if ((op.type & UPDATE) && (emulated != EMULATE_FAIL))
+ kvmppc_set_gpr(vcpu, op.update_reg, op.ea);
+
+ break;
+#endif
case CACHEOP:
/* Do nothing. The guest is performing dcbi because
* hardware DMA is not snooped by the dcache, but
@@ -170,93 +214,6 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
switch (get_op(inst)) {
case 31:
switch (get_xop(inst)) {
-#ifdef CONFIG_PPC_FPU
- case OP_31_XOP_LFSX:
- if (kvmppc_check_fp_disabled(vcpu))
- return EMULATE_DONE;
- vcpu->arch.mmio_sp64_extend = 1;
- emulated = kvmppc_handle_load(run, vcpu,
- KVM_MMIO_REG_FPR|rt, 4, 1);
- break;
-
- case OP_31_XOP_LFSUX:
- if (kvmppc_check_fp_disabled(vcpu))
- return EMULATE_DONE;
- vcpu->arch.mmio_sp64_extend = 1;
- emulated = kvmppc_handle_load(run, vcpu,
- KVM_MMIO_REG_FPR|rt, 4, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
- case OP_31_XOP_LFDX:
- if (kvmppc_check_fp_disabled(vcpu))
- return EMULATE_DONE;
- emulated = kvmppc_handle_load(run, vcpu,
- KVM_MMIO_REG_FPR|rt, 8, 1);
- break;
-
- case OP_31_XOP_LFDUX:
- if (kvmppc_check_fp_disabled(vcpu))
- return EMULATE_DONE;
- emulated = kvmppc_handle_load(run, vcpu,
- KVM_MMIO_REG_FPR|rt, 8, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
- case OP_31_XOP_LFIWAX:
- if (kvmppc_check_fp_disabled(vcpu))
- return EMULATE_DONE;
- emulated = kvmppc_handle_loads(run, vcpu,
- KVM_MMIO_REG_FPR|rt, 4, 1);
- break;
-
- case OP_31_XOP_LFIWZX:
- if (kvmppc_check_fp_disabled(vcpu))
- return EMULATE_DONE;
- emulated = kvmppc_handle_load(run, vcpu,
- KVM_MMIO_REG_FPR|rt, 4, 1);
- break;
-
- case OP_31_XOP_STFSX:
- if (kvmppc_check_fp_disabled(vcpu))
- return EMULATE_DONE;
- vcpu->arch.mmio_sp64_extend = 1;
- emulated = kvmppc_handle_store(run, vcpu,
- VCPU_FPR(vcpu, rs), 4, 1);
- break;
-
- case OP_31_XOP_STFSUX:
- if (kvmppc_check_fp_disabled(vcpu))
- return EMULATE_DONE;
- vcpu->arch.mmio_sp64_extend = 1;
- emulated = kvmppc_handle_store(run, vcpu,
- VCPU_FPR(vcpu, rs), 4, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
- case OP_31_XOP_STFDX:
- if (kvmppc_check_fp_disabled(vcpu))
- return EMULATE_DONE;
- emulated = kvmppc_handle_store(run, vcpu,
- VCPU_FPR(vcpu, rs), 8, 1);
- break;
-
- case OP_31_XOP_STFDUX:
- if (kvmppc_check_fp_disabled(vcpu))
- return EMULATE_DONE;
- emulated = kvmppc_handle_store(run, vcpu,
- VCPU_FPR(vcpu, rs), 8, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
- case OP_31_XOP_STFIWX:
- if (kvmppc_check_fp_disabled(vcpu))
- return EMULATE_DONE;
- emulated = kvmppc_handle_store(run, vcpu,
- VCPU_FPR(vcpu, rs), 4, 1);
- break;
-#endif
-
#ifdef CONFIG_VSX
case OP_31_XOP_LXSDX:
if (kvmppc_check_vsx_disabled(vcpu))
@@ -421,76 +378,6 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
}
break;
-#ifdef CONFIG_PPC_FPU
- case OP_STFS:
- if (kvmppc_check_fp_disabled(vcpu))
- return EMULATE_DONE;
- vcpu->arch.mmio_sp64_extend = 1;
- emulated = kvmppc_handle_store(run, vcpu,
- VCPU_FPR(vcpu, rs),
- 4, 1);
- break;
-
- case OP_STFSU:
- if (kvmppc_check_fp_disabled(vcpu))
- return EMULATE_DONE;
- vcpu->arch.mmio_sp64_extend = 1;
- emulated = kvmppc_handle_store(run, vcpu,
- VCPU_FPR(vcpu, rs),
- 4, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
- case OP_STFD:
- if (kvmppc_check_fp_disabled(vcpu))
- return EMULATE_DONE;
- emulated = kvmppc_handle_store(run, vcpu,
- VCPU_FPR(vcpu, rs),
- 8, 1);
- break;
-
- case OP_STFDU:
- if (kvmppc_check_fp_disabled(vcpu))
- return EMULATE_DONE;
- emulated = kvmppc_handle_store(run, vcpu,
- VCPU_FPR(vcpu, rs),
- 8, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
- case OP_LFS:
- if (kvmppc_check_fp_disabled(vcpu))
- return EMULATE_DONE;
- vcpu->arch.mmio_sp64_extend = 1;
- emulated = kvmppc_handle_load(run, vcpu,
- KVM_MMIO_REG_FPR|rt, 4, 1);
- break;
-
- case OP_LFSU:
- if (kvmppc_check_fp_disabled(vcpu))
- return EMULATE_DONE;
- vcpu->arch.mmio_sp64_extend = 1;
- emulated = kvmppc_handle_load(run, vcpu,
- KVM_MMIO_REG_FPR|rt, 4, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
- case OP_LFD:
- if (kvmppc_check_fp_disabled(vcpu))
- return EMULATE_DONE;
- emulated = kvmppc_handle_load(run, vcpu,
- KVM_MMIO_REG_FPR|rt, 8, 1);
- break;
-
- case OP_LFDU:
- if (kvmppc_check_fp_disabled(vcpu))
- return EMULATE_DONE;
- emulated = kvmppc_handle_load(run, vcpu,
- KVM_MMIO_REG_FPR|rt, 8, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-#endif
-
default:
emulated = EMULATE_FAIL;
break;
--
1.8.3.1
* [PATCH v3 5/7] KVM: PPC: reimplements LOAD_VSX/STORE_VSX instruction mmio emulation with analyse_instr() input
From: wei.guo.simon @ 2018-05-21 5:24 UTC
To: kvm-ppc; +Cc: Paul Mackerras, kvm, linuxppc-dev, Simon Guo
From: Simon Guo <wei.guo.simon@gmail.com>
This patch reimplements LOAD_VSX/STORE_VSX instruction MMIO emulation
with analyse_instr() input. It utilizes the VSX_FPCONV/VSX_SPLAT/SIGNEXT
properties exported by analyse_instr() and handles them accordingly.
When emulating a VSX store, the VSX reg needs to be flushed first so
that the right reg val can be retrieved before it is written to IO
memory.
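A few worked examples of the copy_nums/io_size_each computation in the
diff below (sizes as reported by analyse_instr()):

    lxvd2x -> size 16, element_size 8 -> copy_nums 2, io_size_each 8
    lxvw4x -> size 16, element_size 4 -> copy_nums 4, io_size_each 4
    lxsspx -> size  4, element_size 8 -> copy_nums 1, io_size_each 4
              (size < element_size: single precision-convert copy)
    lxvdsx -> size  8, element_size 8 -> copy_nums 1, io_size_each 8
              (VSX_SPLAT -> KVMPPC_VSX_COPY_DWORD_LOAD_DUMP)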
Suggested-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
arch/powerpc/kvm/emulate_loadstore.c | 227 ++++++++++++++---------------------
1 file changed, 91 insertions(+), 136 deletions(-)
diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
index 5d38f95..ed73497 100644
--- a/arch/powerpc/kvm/emulate_loadstore.c
+++ b/arch/powerpc/kvm/emulate_loadstore.c
@@ -158,6 +158,54 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
break;
#endif
+#ifdef CONFIG_VSX
+ case LOAD_VSX: {
+ int io_size_each;
+
+ if (op.vsx_flags & VSX_CHECK_VEC) {
+ if (kvmppc_check_altivec_disabled(vcpu))
+ return EMULATE_DONE;
+ } else {
+ if (kvmppc_check_vsx_disabled(vcpu))
+ return EMULATE_DONE;
+ }
+
+ if (op.vsx_flags & VSX_FPCONV)
+ vcpu->arch.mmio_sp64_extend = 1;
+
+ if (op.element_size == 8) {
+ if (op.vsx_flags & VSX_SPLAT)
+ vcpu->arch.mmio_vsx_copy_type =
+ KVMPPC_VSX_COPY_DWORD_LOAD_DUMP;
+ else
+ vcpu->arch.mmio_vsx_copy_type =
+ KVMPPC_VSX_COPY_DWORD;
+ } else if (op.element_size == 4) {
+ if (op.vsx_flags & VSX_SPLAT)
+ vcpu->arch.mmio_vsx_copy_type =
+ KVMPPC_VSX_COPY_WORD_LOAD_DUMP;
+ else
+ vcpu->arch.mmio_vsx_copy_type =
+ KVMPPC_VSX_COPY_WORD;
+ } else
+ break;
+
+ if (size < op.element_size) {
+ /* precision convert case: lxsspx, etc */
+ vcpu->arch.mmio_vsx_copy_nums = 1;
+ io_size_each = size;
+ } else { /* lxvw4x, lxvd2x, etc */
+ vcpu->arch.mmio_vsx_copy_nums =
+ size/op.element_size;
+ io_size_each = op.element_size;
+ }
+
+ emulated = kvmppc_handle_vsx_load(run, vcpu,
+ KVM_MMIO_REG_VSX|op.reg, io_size_each,
+ 1, op.type & SIGNEXT);
+ break;
+ }
+#endif
case STORE:
/* If byte reversal is needed, op.val has already been
* reversed by analyse_instr().
@@ -193,6 +241,49 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
break;
#endif
+#ifdef CONFIG_VSX
+ case STORE_VSX: {
+ int io_size_each;
+
+ if (op.vsx_flags & VSX_CHECK_VEC) {
+ if (kvmppc_check_altivec_disabled(vcpu))
+ return EMULATE_DONE;
+ } else {
+ if (kvmppc_check_vsx_disabled(vcpu))
+ return EMULATE_DONE;
+ }
+
+ if (vcpu->kvm->arch.kvm_ops->giveup_ext)
+ vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu,
+ MSR_VSX);
+
+ if (op.vsx_flags & VSX_FPCONV)
+ vcpu->arch.mmio_sp64_extend = 1;
+
+ if (op.element_size == 8)
+ vcpu->arch.mmio_vsx_copy_type =
+ KVMPPC_VSX_COPY_DWORD;
+ else if (op.element_size == 4)
+ vcpu->arch.mmio_vsx_copy_type =
+ KVMPPC_VSX_COPY_WORD;
+ else
+ break;
+
+ if (size < op.element_size) {
+ /* precise conversion case, like stxsspx */
+ vcpu->arch.mmio_vsx_copy_nums = 1;
+ io_size_each = size;
+ } else { /* stxvw4x, stxvd2x, etc */
+ vcpu->arch.mmio_vsx_copy_nums =
+ size/op.element_size;
+ io_size_each = op.element_size;
+ }
+
+ emulated = kvmppc_handle_vsx_store(run, vcpu,
+ op.reg, io_size_each, 1);
+ break;
+ }
+#endif
case CACHEOP:
/* Do nothing. The guest is performing dcbi because
* hardware DMA is not snooped by the dcache, but
@@ -214,142 +305,6 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
switch (get_op(inst)) {
case 31:
switch (get_xop(inst)) {
-#ifdef CONFIG_VSX
- case OP_31_XOP_LXSDX:
- if (kvmppc_check_vsx_disabled(vcpu))
- return EMULATE_DONE;
- vcpu->arch.mmio_vsx_copy_nums = 1;
- vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
- emulated = kvmppc_handle_vsx_load(run, vcpu,
- KVM_MMIO_REG_VSX|rt, 8, 1, 0);
- break;
-
- case OP_31_XOP_LXSSPX:
- if (kvmppc_check_vsx_disabled(vcpu))
- return EMULATE_DONE;
- vcpu->arch.mmio_vsx_copy_nums = 1;
- vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
- vcpu->arch.mmio_sp64_extend = 1;
- emulated = kvmppc_handle_vsx_load(run, vcpu,
- KVM_MMIO_REG_VSX|rt, 4, 1, 0);
- break;
-
- case OP_31_XOP_LXSIWAX:
- if (kvmppc_check_vsx_disabled(vcpu))
- return EMULATE_DONE;
- vcpu->arch.mmio_vsx_copy_nums = 1;
- vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
- emulated = kvmppc_handle_vsx_load(run, vcpu,
- KVM_MMIO_REG_VSX|rt, 4, 1, 1);
- break;
-
- case OP_31_XOP_LXSIWZX:
- if (kvmppc_check_vsx_disabled(vcpu))
- return EMULATE_DONE;
- vcpu->arch.mmio_vsx_copy_nums = 1;
- vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
- emulated = kvmppc_handle_vsx_load(run, vcpu,
- KVM_MMIO_REG_VSX|rt, 4, 1, 0);
- break;
-
- case OP_31_XOP_LXVD2X:
- /*
- * In this case, the official load/store process is like this:
- * Step1, exit from vm by page fault isr, then kvm save vsr.
- * Please see guest_exit_cont->store_fp_state->SAVE_32VSRS
- * as reference.
- *
- * Step2, copy data between memory and VCPU
- * Notice: for LXVD2X/STXVD2X/LXVW4X/STXVW4X, we use
- * 2copies*8bytes or 4copies*4bytes
- * to simulate one copy of 16bytes.
- * Also there is an endian issue here, we should notice the
- * layout of memory.
- * Please see MARCO of LXVD2X_ROT/STXVD2X_ROT as more reference.
- * If host is little-endian, kvm will call XXSWAPD for
- * LXVD2X_ROT/STXVD2X_ROT.
- * So, if host is little-endian,
- * the postion of memeory should be swapped.
- *
- * Step3, return to guest, kvm reset register.
- * Please see kvmppc_hv_entry->load_fp_state->REST_32VSRS
- * as reference.
- */
- if (kvmppc_check_vsx_disabled(vcpu))
- return EMULATE_DONE;
- vcpu->arch.mmio_vsx_copy_nums = 2;
- vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
- emulated = kvmppc_handle_vsx_load(run, vcpu,
- KVM_MMIO_REG_VSX|rt, 8, 1, 0);
- break;
-
- case OP_31_XOP_LXVW4X:
- if (kvmppc_check_vsx_disabled(vcpu))
- return EMULATE_DONE;
- vcpu->arch.mmio_vsx_copy_nums = 4;
- vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_WORD;
- emulated = kvmppc_handle_vsx_load(run, vcpu,
- KVM_MMIO_REG_VSX|rt, 4, 1, 0);
- break;
-
- case OP_31_XOP_LXVDSX:
- if (kvmppc_check_vsx_disabled(vcpu))
- return EMULATE_DONE;
- vcpu->arch.mmio_vsx_copy_nums = 1;
- vcpu->arch.mmio_vsx_copy_type =
- KVMPPC_VSX_COPY_DWORD_LOAD_DUMP;
- emulated = kvmppc_handle_vsx_load(run, vcpu,
- KVM_MMIO_REG_VSX|rt, 8, 1, 0);
- break;
-
- case OP_31_XOP_STXSDX:
- if (kvmppc_check_vsx_disabled(vcpu))
- return EMULATE_DONE;
- vcpu->arch.mmio_vsx_copy_nums = 1;
- vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
- emulated = kvmppc_handle_vsx_store(run, vcpu,
- rs, 8, 1);
- break;
-
- case OP_31_XOP_STXSSPX:
- if (kvmppc_check_vsx_disabled(vcpu))
- return EMULATE_DONE;
- vcpu->arch.mmio_vsx_copy_nums = 1;
- vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
- vcpu->arch.mmio_sp64_extend = 1;
- emulated = kvmppc_handle_vsx_store(run, vcpu,
- rs, 4, 1);
- break;
-
- case OP_31_XOP_STXSIWX:
- if (kvmppc_check_vsx_disabled(vcpu))
- return EMULATE_DONE;
- vcpu->arch.mmio_vsx_offset = 1;
- vcpu->arch.mmio_vsx_copy_nums = 1;
- vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_WORD;
- emulated = kvmppc_handle_vsx_store(run, vcpu,
- rs, 4, 1);
- break;
-
- case OP_31_XOP_STXVD2X:
- if (kvmppc_check_vsx_disabled(vcpu))
- return EMULATE_DONE;
- vcpu->arch.mmio_vsx_copy_nums = 2;
- vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
- emulated = kvmppc_handle_vsx_store(run, vcpu,
- rs, 8, 1);
- break;
-
- case OP_31_XOP_STXVW4X:
- if (kvmppc_check_vsx_disabled(vcpu))
- return EMULATE_DONE;
- vcpu->arch.mmio_vsx_copy_nums = 4;
- vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_WORD;
- emulated = kvmppc_handle_vsx_store(run, vcpu,
- rs, 4, 1);
- break;
-#endif /* CONFIG_VSX */
-
#ifdef CONFIG_ALTIVEC
case OP_31_XOP_LVX:
if (kvmppc_check_altivec_disabled(vcpu))
--
1.8.3.1
* [PATCH v3 6/7] KVM: PPC: expand mmio_vsx_copy_type to mmio_copy_type to cover VMX load/store elem types
From: wei.guo.simon @ 2018-05-21 5:24 UTC
To: kvm-ppc; +Cc: Paul Mackerras, kvm, linuxppc-dev, Simon Guo
From: Simon Guo <wei.guo.simon@gmail.com>
VSX MMIO emulation uses mmio_vsx_copy_type to represent the emulated
VSX element size/type, such as KVMPPC_VSX_COPY_DWORD, etc. This patch
expands mmio_vsx_copy_type to also cover VMX copy types, such as
KVMPPC_VMX_COPY_BYTE (stvebx/lvebx), etc. As a result,
mmio_vsx_copy_type is renamed to mmio_copy_type.
This is a preparation for reimplementing VMX MMIO emulation.
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
arch/powerpc/include/asm/kvm_host.h | 9 +++++++--
arch/powerpc/kvm/emulate_loadstore.c | 14 +++++++-------
arch/powerpc/kvm/powerpc.c | 10 +++++-----
3 files changed, 19 insertions(+), 14 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 4bade29..fe506c8 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -455,6 +455,11 @@ struct mmio_hpte_cache {
#define KVMPPC_VSX_COPY_DWORD_LOAD_DUMP 3
#define KVMPPC_VSX_COPY_WORD_LOAD_DUMP 4
+#define KVMPPC_VMX_COPY_BYTE 8
+#define KVMPPC_VMX_COPY_HWORD 9
+#define KVMPPC_VMX_COPY_WORD 10
+#define KVMPPC_VMX_COPY_DWORD 11
+
struct openpic;
/* W0 and W1 of a XIVE thread management context */
@@ -677,16 +682,16 @@ struct kvm_vcpu_arch {
* Number of simulations for vsx.
* If we use 2*8bytes to simulate 1*16bytes,
* then the number should be 2 and
- * mmio_vsx_copy_type=KVMPPC_VSX_COPY_DWORD.
+ * mmio_copy_type=KVMPPC_VSX_COPY_DWORD.
* If we use 4*4bytes to simulate 1*16bytes,
* the number should be 4 and
* mmio_vsx_copy_type=KVMPPC_VSX_COPY_WORD.
*/
u8 mmio_vsx_copy_nums;
u8 mmio_vsx_offset;
- u8 mmio_vsx_copy_type;
u8 mmio_vsx_tx_sx_enabled;
u8 mmio_vmx_copy_nums;
+ u8 mmio_copy_type;
u8 osi_needed;
u8 osi_enabled;
u8 papr_enabled;
diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
index ed73497..fa4de6b 100644
--- a/arch/powerpc/kvm/emulate_loadstore.c
+++ b/arch/powerpc/kvm/emulate_loadstore.c
@@ -109,7 +109,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
vcpu->arch.mmio_vsx_tx_sx_enabled = get_tx_or_sx(inst);
vcpu->arch.mmio_vsx_copy_nums = 0;
vcpu->arch.mmio_vsx_offset = 0;
- vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_NONE;
+ vcpu->arch.mmio_copy_type = KVMPPC_VSX_COPY_NONE;
vcpu->arch.mmio_sp64_extend = 0;
vcpu->arch.mmio_sign_extend = 0;
vcpu->arch.mmio_vmx_copy_nums = 0;
@@ -175,17 +175,17 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
if (op.element_size == 8) {
if (op.vsx_flags & VSX_SPLAT)
- vcpu->arch.mmio_vsx_copy_type =
+ vcpu->arch.mmio_copy_type =
KVMPPC_VSX_COPY_DWORD_LOAD_DUMP;
else
- vcpu->arch.mmio_vsx_copy_type =
+ vcpu->arch.mmio_copy_type =
KVMPPC_VSX_COPY_DWORD;
} else if (op.element_size == 4) {
if (op.vsx_flags & VSX_SPLAT)
- vcpu->arch.mmio_vsx_copy_type =
+ vcpu->arch.mmio_copy_type =
KVMPPC_VSX_COPY_WORD_LOAD_DUMP;
else
- vcpu->arch.mmio_vsx_copy_type =
+ vcpu->arch.mmio_copy_type =
KVMPPC_VSX_COPY_WORD;
} else
break;
@@ -261,10 +261,10 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
vcpu->arch.mmio_sp64_extend = 1;
if (op.element_size == 8)
- vcpu->arch.mmio_vsx_copy_type =
+ vcpu->arch.mmio_copy_type =
KVMPPC_VSX_COPY_DWORD;
else if (op.element_size == 4)
- vcpu->arch.mmio_vsx_copy_type =
+ vcpu->arch.mmio_copy_type =
KVMPPC_VSX_COPY_WORD;
else
break;
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 8ce9e7b..1580bd2 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -1080,14 +1080,14 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
if (vcpu->kvm->arch.kvm_ops->giveup_ext)
vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_VSX);
- if (vcpu->arch.mmio_vsx_copy_type == KVMPPC_VSX_COPY_DWORD)
+ if (vcpu->arch.mmio_copy_type == KVMPPC_VSX_COPY_DWORD)
kvmppc_set_vsr_dword(vcpu, gpr);
- else if (vcpu->arch.mmio_vsx_copy_type == KVMPPC_VSX_COPY_WORD)
+ else if (vcpu->arch.mmio_copy_type == KVMPPC_VSX_COPY_WORD)
kvmppc_set_vsr_word(vcpu, gpr);
- else if (vcpu->arch.mmio_vsx_copy_type ==
+ else if (vcpu->arch.mmio_copy_type ==
KVMPPC_VSX_COPY_DWORD_LOAD_DUMP)
kvmppc_set_vsr_dword_dump(vcpu, gpr);
- else if (vcpu->arch.mmio_vsx_copy_type ==
+ else if (vcpu->arch.mmio_copy_type ==
KVMPPC_VSX_COPY_WORD_LOAD_DUMP)
kvmppc_set_vsr_word_dump(vcpu, gpr);
break;
@@ -1260,7 +1260,7 @@ static inline int kvmppc_get_vsr_data(struct kvm_vcpu *vcpu, int rs, u64 *val)
u32 dword_offset, word_offset;
union kvmppc_one_reg reg;
int vsx_offset = 0;
- int copy_type = vcpu->arch.mmio_vsx_copy_type;
+ int copy_type = vcpu->arch.mmio_copy_type;
int result = 0;
switch (copy_type) {
--
1.8.3.1
* [PATCH v3 7/7] KVM: PPC: reimplements LOAD_VMX/STORE_VMX instruction mmio emulation with analyse_instr() input
From: wei.guo.simon @ 2018-05-21 5:24 UTC
To: kvm-ppc; +Cc: Paul Mackerras, kvm, linuxppc-dev, Simon Guo
From: Simon Guo <wei.guo.simon@gmail.com>
This patch reimplements LOAD_VMX/STORE_VMX MMIO emulation with
analyse_instr() input. When emulating a store, the VMX reg needs to be
flushed first so that the right reg val can be retrieved before it is
written to IO memory.
This patch also adds support for lvebx/lvehx/lvewx/stvebx/stvehx/stvewx
MMIO emulation. To meet the requirement of handling different element
sizes, kvmppc_handle_load128_by2x64()/kvmppc_handle_store128_by2x64()
are replaced with kvmppc_handle_vmx_load()/kvmppc_handle_vmx_store().
The framework used is similar to the VSX instruction MMIO emulation.
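As an example of the new per-element addressing: for stvehx with an
effective address ending in 0x6, size is 2, so mmio_vmx_offset =
(0x6 & 0xf) / 2 = 3, i.e. the fourth halfword element;
kvmppc_get_vmx_offset_generic() then mirrors that index
(elts - index - 1, here 8 - 3 - 1 = 4) when guest and host byte order
differ.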
Suggested-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
arch/powerpc/include/asm/kvm_host.h | 1 +
arch/powerpc/include/asm/kvm_ppc.h | 10 +-
arch/powerpc/kvm/emulate_loadstore.c | 124 +++++++++++------
arch/powerpc/kvm/powerpc.c | 259 ++++++++++++++++++++++++++++-------
4 files changed, 302 insertions(+), 92 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index fe506c8..8dc5e43 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -691,6 +691,7 @@ struct kvm_vcpu_arch {
u8 mmio_vsx_offset;
u8 mmio_vsx_tx_sx_enabled;
u8 mmio_vmx_copy_nums;
+ u8 mmio_vmx_offset;
u8 mmio_copy_type;
u8 osi_needed;
u8 osi_enabled;
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 1f087c4..e991821 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -81,10 +81,10 @@ extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
extern int kvmppc_handle_vsx_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
unsigned int rt, unsigned int bytes,
int is_default_endian, int mmio_sign_extend);
-extern int kvmppc_handle_load128_by2x64(struct kvm_run *run,
- struct kvm_vcpu *vcpu, unsigned int rt, int is_default_endian);
-extern int kvmppc_handle_store128_by2x64(struct kvm_run *run,
- struct kvm_vcpu *vcpu, unsigned int rs, int is_default_endian);
+extern int kvmppc_handle_vmx_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
+ unsigned int rt, unsigned int bytes, int is_default_endian);
+extern int kvmppc_handle_vmx_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
+ unsigned int rs, unsigned int bytes, int is_default_endian);
extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
u64 val, unsigned int bytes,
int is_default_endian);
@@ -265,6 +265,8 @@ extern int kvmppc_xics_get_xive(struct kvm *kvm, u32 irq, u32 *server,
vector128 vval;
u64 vsxval[2];
u32 vsx32val[4];
+ u16 vsx16val[8];
+ u8 vsx8val[16];
struct {
u64 addr;
u64 length;
diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
index fa4de6b..b6b2d25 100644
--- a/arch/powerpc/kvm/emulate_loadstore.c
+++ b/arch/powerpc/kvm/emulate_loadstore.c
@@ -113,6 +113,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
vcpu->arch.mmio_sp64_extend = 0;
vcpu->arch.mmio_sign_extend = 0;
vcpu->arch.mmio_vmx_copy_nums = 0;
+ vcpu->arch.mmio_vmx_offset = 0;
vcpu->arch.mmio_host_swabbed = 0;
emulated = EMULATE_FAIL;
@@ -158,6 +159,46 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
break;
#endif
+#ifdef CONFIG_ALTIVEC
+ case LOAD_VMX:
+ if (kvmppc_check_altivec_disabled(vcpu))
+ return EMULATE_DONE;
+
+ /* Hardware enforces alignment of VMX accesses */
+ vcpu->arch.vaddr_accessed &= ~((unsigned long)size - 1);
+ vcpu->arch.paddr_accessed &= ~((unsigned long)size - 1);
+
+ if (size == 16) { /* lvx */
+ vcpu->arch.mmio_copy_type =
+ KVMPPC_VMX_COPY_DWORD;
+ } else if (size == 4) { /* lvewx */
+ vcpu->arch.mmio_copy_type =
+ KVMPPC_VMX_COPY_WORD;
+ } else if (size == 2) { /* lvehx */
+ vcpu->arch.mmio_copy_type =
+ KVMPPC_VMX_COPY_HWORD;
+ } else if (size == 1) { /* lvebx */
+ vcpu->arch.mmio_copy_type =
+ KVMPPC_VMX_COPY_BYTE;
+ } else
+ break;
+
+ vcpu->arch.mmio_vmx_offset =
+ (vcpu->arch.vaddr_accessed & 0xf)/size;
+
+ if (size == 16) {
+ vcpu->arch.mmio_vmx_copy_nums = 2;
+ emulated = kvmppc_handle_vmx_load(run,
+ vcpu, KVM_MMIO_REG_VMX|op.reg,
+ 8, 1);
+ } else {
+ vcpu->arch.mmio_vmx_copy_nums = 1;
+ emulated = kvmppc_handle_vmx_load(run, vcpu,
+ KVM_MMIO_REG_VMX|op.reg,
+ size, 1);
+ }
+ break;
+#endif
#ifdef CONFIG_VSX
case LOAD_VSX: {
int io_size_each;
@@ -241,6 +282,48 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
break;
#endif
+#ifdef CONFIG_ALTIVEC
+ case STORE_VMX:
+ if (kvmppc_check_altivec_disabled(vcpu))
+ return EMULATE_DONE;
+
+ /* Hardware enforces alignment of VMX accesses. */
+ vcpu->arch.vaddr_accessed &= ~((unsigned long)size - 1);
+ vcpu->arch.paddr_accessed &= ~((unsigned long)size - 1);
+
+ if (vcpu->kvm->arch.kvm_ops->giveup_ext)
+ vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu,
+ MSR_VEC);
+ if (size == 16) { /* stvx */
+ vcpu->arch.mmio_copy_type =
+ KVMPPC_VMX_COPY_DWORD;
+ } else if (size == 4) { /* stvewx */
+ vcpu->arch.mmio_copy_type =
+ KVMPPC_VMX_COPY_WORD;
+ } else if (size == 2) { /* stvehx */
+ vcpu->arch.mmio_copy_type =
+ KVMPPC_VMX_COPY_HWORD;
+ } else if (size == 1) { /* stvebx */
+ vcpu->arch.mmio_copy_type =
+ KVMPPC_VMX_COPY_BYTE;
+ } else
+ break;
+
+ vcpu->arch.mmio_vmx_offset =
+ (vcpu->arch.vaddr_accessed & 0xf)/size;
+
+ if (size == 16) {
+ vcpu->arch.mmio_vmx_copy_nums = 2;
+ emulated = kvmppc_handle_vmx_store(run,
+ vcpu, op.reg, 8, 1);
+ } else {
+ vcpu->arch.mmio_vmx_copy_nums = 1;
+ emulated = kvmppc_handle_vmx_store(run,
+ vcpu, op.reg, size, 1);
+ }
+
+ break;
+#endif
#ifdef CONFIG_VSX
case STORE_VSX: {
int io_size_each;
@@ -298,47 +381,6 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
}
}
-
- if ((emulated == EMULATE_DONE) || (emulated == EMULATE_DO_MMIO))
- goto out;
-
- switch (get_op(inst)) {
- case 31:
- switch (get_xop(inst)) {
-#ifdef CONFIG_ALTIVEC
- case OP_31_XOP_LVX:
- if (kvmppc_check_altivec_disabled(vcpu))
- return EMULATE_DONE;
- vcpu->arch.vaddr_accessed &= ~0xFULL;
- vcpu->arch.paddr_accessed &= ~0xFULL;
- vcpu->arch.mmio_vmx_copy_nums = 2;
- emulated = kvmppc_handle_load128_by2x64(run, vcpu,
- KVM_MMIO_REG_VMX|rt, 1);
- break;
-
- case OP_31_XOP_STVX:
- if (kvmppc_check_altivec_disabled(vcpu))
- return EMULATE_DONE;
- vcpu->arch.vaddr_accessed &= ~0xFULL;
- vcpu->arch.paddr_accessed &= ~0xFULL;
- vcpu->arch.mmio_vmx_copy_nums = 2;
- emulated = kvmppc_handle_store128_by2x64(run, vcpu,
- rs, 1);
- break;
-#endif /* CONFIG_ALTIVEC */
-
- default:
- emulated = EMULATE_FAIL;
- break;
- }
- break;
-
- default:
- emulated = EMULATE_FAIL;
- break;
- }
-
-out:
if (emulated == EMULATE_FAIL) {
advance = 0;
kvmppc_core_queue_program(vcpu, 0);
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 1580bd2..05eccdc 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -953,30 +953,110 @@ static inline void kvmppc_set_vsr_word(struct kvm_vcpu *vcpu,
#endif /* CONFIG_VSX */
#ifdef CONFIG_ALTIVEC
+static inline int kvmppc_get_vmx_offset_generic(struct kvm_vcpu *vcpu,
+ int index, int element_size)
+{
+ int offset;
+ int elts = sizeof(vector128)/element_size;
+
+ if ((index < 0) || (index >= elts))
+ return -1;
+
+ if (kvmppc_need_byteswap(vcpu))
+ offset = elts - index - 1;
+ else
+ offset = index;
+
+ return offset;
+}
+
+static inline int kvmppc_get_vmx_dword_offset(struct kvm_vcpu *vcpu,
+ int index)
+{
+ return kvmppc_get_vmx_offset_generic(vcpu, index, 8);
+}
+
+static inline int kvmppc_get_vmx_word_offset(struct kvm_vcpu *vcpu,
+ int index)
+{
+ return kvmppc_get_vmx_offset_generic(vcpu, index, 4);
+}
+
+static inline int kvmppc_get_vmx_hword_offset(struct kvm_vcpu *vcpu,
+ int index)
+{
+ return kvmppc_get_vmx_offset_generic(vcpu, index, 2);
+}
+
+static inline int kvmppc_get_vmx_byte_offset(struct kvm_vcpu *vcpu,
+ int index)
+{
+ return kvmppc_get_vmx_offset_generic(vcpu, index, 1);
+}
+
+
static inline void kvmppc_set_vmx_dword(struct kvm_vcpu *vcpu,
- u64 gpr)
+ u64 gpr)
{
+ union kvmppc_one_reg val;
+ int offset = kvmppc_get_vmx_dword_offset(vcpu,
+ vcpu->arch.mmio_vmx_offset);
int index = vcpu->arch.io_gpr & KVM_MMIO_REG_MASK;
- u32 hi, lo;
- u32 di;
-#ifdef __BIG_ENDIAN
- hi = gpr >> 32;
- lo = gpr & 0xffffffff;
-#else
- lo = gpr >> 32;
- hi = gpr & 0xffffffff;
-#endif
+ if (offset == -1)
+ return;
+
+ val.vval = VCPU_VSX_VR(vcpu, index);
+ val.vsxval[offset] = gpr;
+ VCPU_VSX_VR(vcpu, index) = val.vval;
+}
+
+static inline void kvmppc_set_vmx_word(struct kvm_vcpu *vcpu,
+ u32 gpr32)
+{
+ union kvmppc_one_reg val;
+ int offset = kvmppc_get_vmx_word_offset(vcpu,
+ vcpu->arch.mmio_vmx_offset);
+ int index = vcpu->arch.io_gpr & KVM_MMIO_REG_MASK;
+
+ if (offset == -1)
+ return;
+
+ val.vval = VCPU_VSX_VR(vcpu, index);
+ val.vsx32val[offset] = gpr32;
+ VCPU_VSX_VR(vcpu, index) = val.vval;
+}
- di = 2 - vcpu->arch.mmio_vmx_copy_nums; /* doubleword index */
- if (di > 1)
+static inline void kvmppc_set_vmx_hword(struct kvm_vcpu *vcpu,
+ u16 gpr16)
+{
+ union kvmppc_one_reg val;
+ int offset = kvmppc_get_vmx_hword_offset(vcpu,
+ vcpu->arch.mmio_vmx_offset);
+ int index = vcpu->arch.io_gpr & KVM_MMIO_REG_MASK;
+
+ if (offset == -1)
return;
- if (vcpu->arch.mmio_host_swabbed)
- di = 1 - di;
+ val.vval = VCPU_VSX_VR(vcpu, index);
+ val.vsx16val[offset] = gpr16;
+ VCPU_VSX_VR(vcpu, index) = val.vval;
+}
+
+static inline void kvmppc_set_vmx_byte(struct kvm_vcpu *vcpu,
+ u8 gpr8)
+{
+ union kvmppc_one_reg val;
+ int offset = kvmppc_get_vmx_byte_offset(vcpu,
+ vcpu->arch.mmio_vmx_offset);
+ int index = vcpu->arch.io_gpr & KVM_MMIO_REG_MASK;
- VCPU_VSX_VR(vcpu, index).u[di * 2] = hi;
- VCPU_VSX_VR(vcpu, index).u[di * 2 + 1] = lo;
+ if (offset == -1)
+ return;
+
+ val.vval = VCPU_VSX_VR(vcpu, index);
+ val.vsx8val[offset] = gpr8;
+ VCPU_VSX_VR(vcpu, index) = val.vval;
}
#endif /* CONFIG_ALTIVEC */
@@ -1097,7 +1177,16 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
if (vcpu->kvm->arch.kvm_ops->giveup_ext)
vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_VEC);
- kvmppc_set_vmx_dword(vcpu, gpr);
+ if (vcpu->arch.mmio_copy_type == KVMPPC_VMX_COPY_DWORD)
+ kvmppc_set_vmx_dword(vcpu, gpr);
+ else if (vcpu->arch.mmio_copy_type == KVMPPC_VMX_COPY_WORD)
+ kvmppc_set_vmx_word(vcpu, gpr);
+ else if (vcpu->arch.mmio_copy_type ==
+ KVMPPC_VMX_COPY_HWORD)
+ kvmppc_set_vmx_hword(vcpu, gpr);
+ else if (vcpu->arch.mmio_copy_type ==
+ KVMPPC_VMX_COPY_BYTE)
+ kvmppc_set_vmx_byte(vcpu, gpr);
break;
#endif
default:
@@ -1376,14 +1465,16 @@ static int kvmppc_emulate_mmio_vsx_loadstore(struct kvm_vcpu *vcpu,
#endif /* CONFIG_VSX */
#ifdef CONFIG_ALTIVEC
-/* handle quadword load access in two halves */
-int kvmppc_handle_load128_by2x64(struct kvm_run *run, struct kvm_vcpu *vcpu,
- unsigned int rt, int is_default_endian)
+int kvmppc_handle_vmx_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
+ unsigned int rt, unsigned int bytes, int is_default_endian)
{
enum emulation_result emulated = EMULATE_DONE;
+ if (vcpu->arch.mmio_vmx_copy_nums > 2)
+ return EMULATE_FAIL;
+
while (vcpu->arch.mmio_vmx_copy_nums) {
- emulated = __kvmppc_handle_load(run, vcpu, rt, 8,
+ emulated = __kvmppc_handle_load(run, vcpu, rt, bytes,
is_default_endian, 0);
if (emulated != EMULATE_DONE)
@@ -1391,55 +1482,127 @@ int kvmppc_handle_load128_by2x64(struct kvm_run *run, struct kvm_vcpu *vcpu,
vcpu->arch.paddr_accessed += run->mmio.len;
vcpu->arch.mmio_vmx_copy_nums--;
+ vcpu->arch.mmio_vmx_offset++;
}
return emulated;
}
-static inline int kvmppc_get_vmx_data(struct kvm_vcpu *vcpu, int rs, u64 *val)
+int kvmppc_get_vmx_dword(struct kvm_vcpu *vcpu, int index, u64 *val)
{
- vector128 vrs = VCPU_VSX_VR(vcpu, rs);
- u32 di;
- u64 w0, w1;
+ union kvmppc_one_reg reg;
+ int vmx_offset = 0;
+ int result = 0;
+
+ vmx_offset =
+ kvmppc_get_vmx_dword_offset(vcpu, vcpu->arch.mmio_vmx_offset);
- di = 2 - vcpu->arch.mmio_vmx_copy_nums; /* doubleword index */
- if (di > 1)
+ if (vmx_offset == -1)
return -1;
- if (kvmppc_need_byteswap(vcpu))
- di = 1 - di;
+ reg.vval = VCPU_VSX_VR(vcpu, index);
+ *val = reg.vsxval[vmx_offset];
- w0 = vrs.u[di * 2];
- w1 = vrs.u[di * 2 + 1];
+ return result;
+}
-#ifdef __BIG_ENDIAN
- *val = (w0 << 32) | w1;
-#else
- *val = (w1 << 32) | w0;
-#endif
- return 0;
+int kvmppc_get_vmx_word(struct kvm_vcpu *vcpu, int index, u64 *val)
+{
+ union kvmppc_one_reg reg;
+ int vmx_offset = 0;
+ int result = 0;
+
+ vmx_offset =
+ kvmppc_get_vmx_word_offset(vcpu, vcpu->arch.mmio_vmx_offset);
+
+ if (vmx_offset == -1)
+ return -1;
+
+ reg.vval = VCPU_VSX_VR(vcpu, index);
+ *val = reg.vsx32val[vmx_offset];
+
+ return result;
+}
+
+int kvmppc_get_vmx_hword(struct kvm_vcpu *vcpu, int index, u64 *val)
+{
+ union kvmppc_one_reg reg;
+ int vmx_offset = 0;
+ int result = 0;
+
+ vmx_offset =
+ kvmppc_get_vmx_hword_offset(vcpu, vcpu->arch.mmio_vmx_offset);
+
+ if (vmx_offset == -1)
+ return -1;
+
+ reg.vval = VCPU_VSX_VR(vcpu, index);
+ *val = reg.vsx16val[vmx_offset];
+
+ return result;
+}
+
+int kvmppc_get_vmx_byte(struct kvm_vcpu *vcpu, int index, u64 *val)
+{
+ union kvmppc_one_reg reg;
+ int vmx_offset = 0;
+ int result = 0;
+
+ vmx_offset =
+ kvmppc_get_vmx_byte_offset(vcpu, vcpu->arch.mmio_vmx_offset);
+
+ if (vmx_offset == -1)
+ return -1;
+
+ reg.vval = VCPU_VSX_VR(vcpu, index);
+ *val = reg.vsx8val[vmx_offset];
+
+ return result;
}
-/* handle quadword store in two halves */
-int kvmppc_handle_store128_by2x64(struct kvm_run *run, struct kvm_vcpu *vcpu,
- unsigned int rs, int is_default_endian)
+int kvmppc_handle_vmx_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
+ unsigned int rs, unsigned int bytes, int is_default_endian)
{
u64 val = 0;
+ unsigned int index = rs & KVM_MMIO_REG_MASK;
enum emulation_result emulated = EMULATE_DONE;
+ if (vcpu->arch.mmio_vmx_copy_nums > 2)
+ return EMULATE_FAIL;
+
vcpu->arch.io_gpr = rs;
while (vcpu->arch.mmio_vmx_copy_nums) {
- if (kvmppc_get_vmx_data(vcpu, rs, &val) == -1)
+ switch (vcpu->arch.mmio_copy_type) {
+ case KVMPPC_VMX_COPY_DWORD:
+ if (kvmppc_get_vmx_dword(vcpu, index, &val) == -1)
+ return EMULATE_FAIL;
+
+ break;
+ case KVMPPC_VMX_COPY_WORD:
+ if (kvmppc_get_vmx_word(vcpu, index, &val) == -1)
+ return EMULATE_FAIL;
+ break;
+ case KVMPPC_VMX_COPY_HWORD:
+ if (kvmppc_get_vmx_hword(vcpu, index, &val) == -1)
+ return EMULATE_FAIL;
+ break;
+ case KVMPPC_VMX_COPY_BYTE:
+ if (kvmppc_get_vmx_byte(vcpu, index, &val) == -1)
+ return EMULATE_FAIL;
+ break;
+ default:
return EMULATE_FAIL;
+ }
- emulated = kvmppc_handle_store(run, vcpu, val, 8,
+ emulated = kvmppc_handle_store(run, vcpu, val, bytes,
is_default_endian);
if (emulated != EMULATE_DONE)
break;
vcpu->arch.paddr_accessed += run->mmio.len;
vcpu->arch.mmio_vmx_copy_nums--;
+ vcpu->arch.mmio_vmx_offset++;
}
return emulated;
@@ -1454,11 +1617,11 @@ static int kvmppc_emulate_mmio_vmx_loadstore(struct kvm_vcpu *vcpu,
vcpu->arch.paddr_accessed += run->mmio.len;
if (!vcpu->mmio_is_write) {
- emulated = kvmppc_handle_load128_by2x64(run, vcpu,
- vcpu->arch.io_gpr, 1);
+ emulated = kvmppc_handle_vmx_load(run, vcpu,
+ vcpu->arch.io_gpr, run->mmio.len, 1);
} else {
- emulated = kvmppc_handle_store128_by2x64(run, vcpu,
- vcpu->arch.io_gpr, 1);
+ emulated = kvmppc_handle_vmx_store(run, vcpu,
+ vcpu->arch.io_gpr, run->mmio.len, 1);
}
switch (emulated) {
@@ -1602,8 +1765,10 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
}
#endif
#ifdef CONFIG_ALTIVEC
- if (vcpu->arch.mmio_vmx_copy_nums > 0)
+ if (vcpu->arch.mmio_vmx_copy_nums > 0) {
vcpu->arch.mmio_vmx_copy_nums--;
+ vcpu->arch.mmio_vmx_offset++;
+ }
if (vcpu->arch.mmio_vmx_copy_nums > 0) {
r = kvmppc_emulate_mmio_vmx_loadstore(vcpu, run);
--
1.8.3.1
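To see the shape of the loop the patch introduces, where mmio_vmx_copy_nums
counts MMIO transactions down while mmio_vmx_offset steps through vector
elements, the following self-contained sketch mirrors only that bookkeeping.
All names are invented for illustration; this is not kernel code.

#include <stdint.h>
#include <stdio.h>

/* Illustration only: walk a 16-byte vector register in fixed-size
 * chunks. copy_nums counts remaining MMIO transactions; offset selects
 * the element touched by the current transaction. */
struct vmx_mmio {
	int copy_nums;
	int offset;
};

static void emulate_lvx(struct vmx_mmio *s, uint64_t vr[2],
			const uint64_t io_mem[2])
{
	s->copy_nums = 2;	/* 16 bytes as two 8-byte accesses */
	s->offset = 0;
	while (s->copy_nums) {
		vr[s->offset] = io_mem[s->offset];	/* one MMIO access */
		s->copy_nums--;
		s->offset++;	/* mirrors mmio_vmx_offset++ above */
	}
}

int main(void)
{
	const uint64_t io_mem[2] = { 0x1122334455667788ULL,
				     0x99aabbccddeeff00ULL };
	uint64_t vr[2] = { 0, 0 };
	struct vmx_mmio s;

	emulate_lvx(&s, vr, io_mem);
	printf("vr = %016llx %016llx\n",
	       (unsigned long long)vr[0], (unsigned long long)vr[1]);
	return 0;
}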
* Re: [PATCH v3 5/7] KVM: PPC: reimplements LOAD_VSX/STORE_VSX instruction mmio emulation with analyse_intr() input
2018-05-21 5:24 ` [PATCH v3 5/7] KVM: PPC: reimplements LOAD_VSX/STORE_VSX " wei.guo.simon
@ 2018-05-22 9:41 ` Paul Mackerras
2018-05-23 3:06 ` Simon Guo
0 siblings, 1 reply; 10+ messages in thread
From: Paul Mackerras @ 2018-05-22 9:41 UTC (permalink / raw)
To: wei.guo.simon; +Cc: kvm-ppc, kvm, linuxppc-dev
On Mon, May 21, 2018 at 01:24:24PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
>
> This patch reimplements LOAD_VSX/STORE_VSX instruction MMIO emulation with
> analyse_instr() input. It utilizes the VSX_FPCONV/VSX_SPLAT/SIGNEXT flags exported
> by analyse_instr() and handles each case accordingly.
>
> When emulating a VSX store, the VSX register needs to be flushed first so that
> the correct register value can be retrieved before it is written to I/O memory.
When I tested this patch set with the MMIO emulation test program I
have, I got a host crash on the first test that used a VSX instruction
with a register number >= 32, that is, a VMX register. The crash was
that it hit the BUG() at line 1193 of arch/powerpc/kvm/powerpc.c.
The reason it hit the BUG() is that vcpu->arch.io_gpr was 0xa3.
What's happening here is that analyse_instr() gives register numbers
in the range 32-63 for VSX instructions that access VMX registers.
When 35 is ORed with 0x80 (KVM_MMIO_REG_VSX) we get 0xa3.
The old code didn't pass the high bit of the register number to
kvmppc_handle_vsx_load/store, but instead passed it via the
vcpu->arch.mmio_vsx_tx_sx_enabled field. With your patch set we still
set and use that field, so the patch below on top of your patches is
the quick fix. Ideally we would get rid of that field and just use
the high (0x20) bit of the register number instead, but that can be
cleaned up later.
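The arithmetic above is easy to check in isolation. A minimal userspace
sketch, assuming only the two constants quoted in this thread (an
illustration, not kernel code):

#include <stdio.h>

/* Constants as quoted above; everything else here is invented. */
#define KVM_MMIO_REG_MASK	0x1f
#define KVM_MMIO_REG_VSX	0x80

int main(void)
{
	unsigned int op_reg = 35;	/* VSR 35, i.e. VMX register 3 */

	/* Without masking, bit 5 leaks into the bank bits: 0xa3. */
	unsigned int bad  = KVM_MMIO_REG_VSX | op_reg;
	/* Keeping only the low 5 bits gives 0x83 instead. */
	unsigned int good = KVM_MMIO_REG_VSX | (op_reg & 0x1f);

	printf("bad=0x%x good=0x%x index=%u\n",
	       bad, good, good & KVM_MMIO_REG_MASK);
	return 0;
}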
If you like, I will fold the patch below into this patch and push the
series to my kvm-ppc-next branch.
Paul.
---
diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
index 0165fcd..afde788 100644
--- a/arch/powerpc/kvm/emulate_loadstore.c
+++ b/arch/powerpc/kvm/emulate_loadstore.c
@@ -242,8 +242,8 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
}
emulated = kvmppc_handle_vsx_load(run, vcpu,
- KVM_MMIO_REG_VSX|op.reg, io_size_each,
- 1, op.type & SIGNEXT);
+ KVM_MMIO_REG_VSX | (op.reg & 0x1f),
+ io_size_each, 1, op.type & SIGNEXT);
break;
}
#endif
@@ -363,7 +363,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
}
emulated = kvmppc_handle_vsx_store(run, vcpu,
- op.reg, io_size_each, 1);
+ op.reg & 0x1f, io_size_each, 1);
break;
}
#endif
* Re: [PATCH v3 5/7] KVM: PPC: reimplements LOAD_VSX/STORE_VSX instruction mmio emulation with analyse_intr() input
2018-05-22 9:41 ` Paul Mackerras
@ 2018-05-23 3:06 ` Simon Guo
0 siblings, 0 replies; 10+ messages in thread
From: Simon Guo @ 2018-05-23 3:06 UTC (permalink / raw)
To: Paul Mackerras; +Cc: kvm-ppc, kvm, linuxppc-dev
Hi Paul,
On Tue, May 22, 2018 at 07:41:51PM +1000, Paul Mackerras wrote:
> On Mon, May 21, 2018 at 01:24:24PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> >
> > This patch reimplements LOAD_VSX/STORE_VSX instruction MMIO emulation with
> > analyse_instr() input. It utilizes the VSX_FPCONV/VSX_SPLAT/SIGNEXT flags exported
> > by analyse_instr() and handles each case accordingly.
> >
> > When emulating a VSX store, the VSX register needs to be flushed first so that
> > the correct register value can be retrieved before it is written to I/O memory.
>
> When I tested this patch set with the MMIO emulation test program I
> have, I got a host crash on the first test that used a VSX instruction
> with a register number >= 32, that is, a VMX register. The crash was
> that it hit the BUG() at line 1193 of arch/powerpc/kvm/powerpc.c.
>
> The reason it hit the BUG() is that vcpu->arch.io_gpr was 0xa3.
> What's happening here is that analyse_instr() gives register numbers
> in the range 32-63 for VSX instructions that access VMX registers.
> When 35 is ORed with 0x80 (KVM_MMIO_REG_VSX) we get 0xa3.
>
> The old code didn't pass the high bit of the register number to
> kvmppc_handle_vsx_load/store, but instead passed it via the
> vcpu->arch.mmio_vsx_tx_sx_enabled field. With your patch set we still
> set and use that field, so the patch below on top of your patches is
> the quick fix. Ideally we would get rid of that field and just use
> the high (0x20) bit of the register number instead, but that can be
> cleaned up later.
>
> If you like, I will fold the patch below into this patch and push the
> series to my kvm-ppc-next branch.
>
> Paul.
Sorry, my tests missed this kind of case. Please go ahead and fold in the patch
as you suggested. Thanks for pointing it out.
If you like, I can do the cleanup work. If I understand correctly,
we need to widen io_gpr from u8 to u16 so that the register number can
use 6 bits while leaving room for the other register flag bits.
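For concreteness, one possible u16 layout, with all names invented for
this sketch (not what is in any tree):

/* Hypothetical io_gpr encoding: the low 6 bits hold the register
 * number (0-63); register banks occupy multiples of 0x40 above it. */
#define MMIO_REG_NR_MASK	0x003f
#define MMIO_REG_BANK_GPR	0x0000
#define MMIO_REG_BANK_FPR	0x0040
#define MMIO_REG_BANK_VSX	0x0080

static inline unsigned short mmio_reg_encode(unsigned short bank,
					     unsigned short nr)
{
	return bank | (nr & MMIO_REG_NR_MASK);
}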
BR,
- Simon
> ---
> diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
> index 0165fcd..afde788 100644
> --- a/arch/powerpc/kvm/emulate_loadstore.c
> +++ b/arch/powerpc/kvm/emulate_loadstore.c
> @@ -242,8 +242,8 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
> }
>
> emulated = kvmppc_handle_vsx_load(run, vcpu,
> - KVM_MMIO_REG_VSX|op.reg, io_size_each,
> - 1, op.type & SIGNEXT);
> + KVM_MMIO_REG_VSX | (op.reg & 0x1f),
> + io_size_each, 1, op.type & SIGNEXT);
> break;
> }
> #endif
> @@ -363,7 +363,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
> }
>
> emulated = kvmppc_handle_vsx_store(run, vcpu,
> - op.reg, io_size_each, 1);
> + op.reg & 0x1f, io_size_each, 1);
> break;
> }
> #endif
Thread overview: 10+ messages
2018-05-21 5:24 [PATCH v3 0/7] KVM: PPC: reimplement mmio emulation with analyse_instr() wei.guo.simon
2018-05-21 5:24 ` [PATCH v3 1/7] KVM: PPC: add KVMPPC_VSX_COPY_WORD_LOAD_DUMP type support for mmio emulation wei.guo.simon
2018-05-21 5:24 ` [PATCH v3 2/7] KVM: PPC: reimplement non-SIMD LOAD/STORE instruction mmio emulation with analyse_intr() input wei.guo.simon
2018-05-21 5:24 ` [PATCH v3 3/7] KVM: PPC: add giveup_ext() hook for PPC KVM ops wei.guo.simon
2018-05-21 5:24 ` [PATCH v3 4/7] KVM: PPC: reimplement LOAD_FP/STORE_FP instruction mmio emulation with analyse_intr() input wei.guo.simon
2018-05-21 5:24 ` [PATCH v3 5/7] KVM: PPC: reimplements LOAD_VSX/STORE_VSX " wei.guo.simon
2018-05-22 9:41 ` Paul Mackerras
2018-05-23 3:06 ` Simon Guo
2018-05-21 5:24 ` [PATCH v3 6/7] KVM: PPC: expand mmio_vsx_copy_type to mmio_copy_type to cover VMX load/store elem types wei.guo.simon
2018-05-21 5:24 ` [PATCH v3 7/7] KVM: PPC: reimplements LOAD_VMX/STORE_VMX instruction mmio emulation with analyse_intr() input wei.guo.simon