* [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code
@ 2014-06-22 21:23 Alexander Graf
2014-06-22 21:23 ` [PATCH 01/33] KVM: PPC: Implement kvmppc_xlate for all targets Alexander Graf
` (33 more replies)
0 siblings, 34 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
Howdy,
Ben reminded me a while back that we have a nasty race in our KVM PV code.
We replace a few guest instructions with longer instruction streams that check
whether trapping out is actually necessary (for mtmsr, for example, there is
no need to trap if we only disable interrupts). While executing such a
replacement chunk we must not receive any interrupts, because they could
overwrite the scratch space into which we have already saved otherwise
clobbered register state.
So we have a concept called "critical sections" which allows the guest to
atomically enter and leave "interrupts disabled" mode without touching the
MSR. When we are supposed to deliver an interrupt into the guest while it is
inside a critical section, we don't inject the interrupt yet, but leave it
pending until the next trap.
However, we never really know when the next trap will occur. For all we know,
it could be never. So we have created a race that is a potential source of
interrupt loss, or at least indefinite deferral.
This patch set aims to solve the race. Instead of merely deferring an
interrupt when we hit such a situation, we enter a special instruction
interpretation mode. In this mode we interpret every PPC instruction the guest
executes until it is out of the critical section again, at which point we can
inject the interrupt.
This bug only affects KVM implementations that make use of the magic page,
i.e. e500v2, book3s_32 and book3s_64 PR KVM.
Alex
Alexander Graf (33):
KVM: PPC: Implement kvmppc_xlate for all targets
KVM: PPC: Move kvmppc_ld/st to common code
KVM: PPC: Remove kvmppc_bad_hva()
KVM: PPC: Propagate kvmppc_xlate errors properly
KVM: PPC: Use kvm_read_guest in kvmppc_ld
KVM: PPC: Handle magic page in kvmppc_ld/st
KVM: PPC: Separate loadstore emulation from priv emulation
KVM: PPC: Introduce emulation for unprivileged instructions
KVM: PPC: Move critical section detection to common code
KVM: PPC: Make critical section detection conditional
KVM: PPC: BookE: Use common critical section helper
KVM: PPC: Emulate critical sections when we hit them
KVM: PPC: Expose helper functions for data/inst faults
KVM: PPC: Add std instruction emulation
KVM: PPC: Add stw instruction emulation
KVM: PPC: Add ld instruction emulation
KVM: PPC: Add lwz instruction emulation
KVM: PPC: Add mfcr instruction emulation
KVM: PPC: Add addis instruction emulation
KVM: PPC: Add ori instruction emulation
KVM: PPC: Add and instruction emulation
KVM: PPC: Add andi. instruction emulation
KVM: PPC: Add or instruction emulation
KVM: PPC: Add cmpwi/cmpdi instruction emulation
KVM: PPC: Add bc instruction emulation
KVM: PPC: Add mtcrf instruction emulation
KVM: PPC: Add xor instruction emulation
KVM: PPC: Add oris instruction emulation
KVM: PPC: Add rldicr/rldicl/rldic instruction emulation
KVM: PPC: Add rlwimi instruction emulation
KVM: PPC: Add rlwinm instruction emulation
KVM: PPC: Handle NV registers in emulated critical sections
KVM: PPC: Enable critical section emulation
arch/powerpc/include/asm/kvm_book3s.h | 9 +-
arch/powerpc/include/asm/kvm_booke.h | 10 +
arch/powerpc/include/asm/kvm_host.h | 4 +-
arch/powerpc/include/asm/kvm_ppc.h | 29 ++
arch/powerpc/include/asm/ppc-opcode.h | 14 +
arch/powerpc/kvm/Makefile | 4 +-
arch/powerpc/kvm/book3s.c | 142 ++------
arch/powerpc/kvm/book3s_pr.c | 16 +-
arch/powerpc/kvm/booke.c | 120 +++++--
arch/powerpc/kvm/emulate.c | 656 ++++++++++++++++++++++++----------
arch/powerpc/kvm/emulate_loadstore.c | 266 ++++++++++++++
arch/powerpc/kvm/powerpc.c | 123 ++++++-
12 files changed, 1076 insertions(+), 317 deletions(-)
create mode 100644 arch/powerpc/kvm/emulate_loadstore.c
--
1.8.1.4
^ permalink raw reply [flat|nested] 40+ messages in thread
* [PATCH 01/33] KVM: PPC: Implement kvmppc_xlate for all targets
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 02/33] KVM: PPC: Move kvmppc_ld/st to common code Alexander Graf
` (32 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
We have a nice API to find the translated GPA of a GVA, including protection
flags. So far we only use it on Book3S, but there is no reason it shouldn't be
used on BookE as well.
Implement a kvmppc_xlate() version for BookE and clean the interface up to
make it more readable in general.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/kvm_ppc.h | 13 ++++++++++
arch/powerpc/kvm/book3s.c | 12 ++++++---
arch/powerpc/kvm/booke.c | 51 ++++++++++++++++++++++++++++++++++++++
3 files changed, 72 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 9c89cdd..837533a 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -47,6 +47,16 @@ enum emulation_result {
EMULATE_EXIT_USER, /* emulation requires exit to user-space */
};
+enum xlate_instdata {
+ XLATE_INST, /* translate instruction address */
+ XLATE_DATA /* translate data address */
+};
+
+enum xlate_readwrite {
+ XLATE_READ, /* check for read permissions */
+ XLATE_WRITE /* check for write permissions */
+};
+
extern int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu);
extern int __kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu);
extern void kvmppc_handler_highmem(void);
@@ -86,6 +96,9 @@ extern gpa_t kvmppc_mmu_xlate(struct kvm_vcpu *vcpu, unsigned int gtlb_index,
gva_t eaddr);
extern void kvmppc_mmu_dtlb_miss(struct kvm_vcpu *vcpu);
extern void kvmppc_mmu_itlb_miss(struct kvm_vcpu *vcpu);
+extern int kvmppc_xlate(struct kvm_vcpu *vcpu, ulong eaddr,
+ enum xlate_instdata xlid, enum xlate_readwrite xlrw,
+ struct kvmppc_pte *pte);
extern struct kvm_vcpu *kvmppc_core_vcpu_create(struct kvm *kvm,
unsigned int id);
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 90aa5c7..b644f51 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -368,9 +368,11 @@ pfn_t kvmppc_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn, bool writing,
}
EXPORT_SYMBOL_GPL(kvmppc_gfn_to_pfn);
-static int kvmppc_xlate(struct kvm_vcpu *vcpu, ulong eaddr, bool data,
- bool iswrite, struct kvmppc_pte *pte)
+int kvmppc_xlate(struct kvm_vcpu *vcpu, ulong eaddr, enum xlate_instdata xlid,
+ enum xlate_readwrite xlrw, struct kvmppc_pte *pte)
{
+ bool data = (xlid == XLATE_DATA);
+ bool iswrite = (xlrw == XLATE_WRITE);
int relocated = (kvmppc_get_msr(vcpu) & (data ? MSR_DR : MSR_IR));
int r;
@@ -421,7 +423,8 @@ int kvmppc_st(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
vcpu->stat.st++;
- if (kvmppc_xlate(vcpu, *eaddr, data, true, &pte))
+ if (kvmppc_xlate(vcpu, *eaddr, data ? XLATE_DATA : XLATE_INST,
+ XLATE_WRITE, &pte))
return -ENOENT;
*eaddr = pte.raddr;
@@ -444,7 +447,8 @@ int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
vcpu->stat.ld++;
- if (kvmppc_xlate(vcpu, *eaddr, data, false, &pte))
+ if (kvmppc_xlate(vcpu, *eaddr, data ? XLATE_DATA : XLATE_INST,
+ XLATE_READ, &pte))
goto nopte;
*eaddr = pte.raddr;
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index ab62109..58d9b6c 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -1788,6 +1788,57 @@ void kvm_guest_protect_msr(struct kvm_vcpu *vcpu, ulong prot_bitmap, bool set)
#endif
}
+int kvmppc_xlate(struct kvm_vcpu *vcpu, ulong eaddr, enum xlate_instdata xlid,
+ enum xlate_readwrite xlrw, struct kvmppc_pte *pte)
+{
+ int gtlb_index;
+ gpa_t gpaddr;
+
+#ifdef CONFIG_KVM_E500V2
+ if (!(vcpu->arch.shared->msr & MSR_PR) &&
+ (eaddr & PAGE_MASK) == vcpu->arch.magic_page_ea) {
+ pte->eaddr = eaddr;
+ pte->raddr = (vcpu->arch.magic_page_pa & PAGE_MASK) |
+ (eaddr & ~PAGE_MASK);
+ pte->vpage = eaddr >> PAGE_SHIFT;
+ pte->may_read = true;
+ pte->may_write = true;
+ pte->may_execute = true;
+
+ return 0;
+ }
+#endif
+
+ /* Check the guest TLB. */
+ switch (xlid) {
+ case XLATE_INST:
+ gtlb_index = kvmppc_mmu_itlb_index(vcpu, eaddr);
+ break;
+ case XLATE_DATA:
+ gtlb_index = kvmppc_mmu_dtlb_index(vcpu, eaddr);
+ break;
+ default:
+ BUG();
+ }
+
+ /* Do we have a TLB entry at all? */
+ if (gtlb_index < 0)
+ return -ENOENT;
+
+ gpaddr = kvmppc_mmu_xlate(vcpu, gtlb_index, eaddr);
+
+ pte->eaddr = eaddr;
+ pte->raddr = (gpaddr & PAGE_MASK) | (eaddr & ~PAGE_MASK);
+ pte->vpage = eaddr >> PAGE_SHIFT;
+
+ /* XXX read permissions from the guest TLB */
+ pte->may_read = true;
+ pte->may_write = true;
+ pte->may_execute = true;
+
+ return 0;
+}
+
int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
struct kvm_guest_debug *dbg)
{
--
1.8.1.4
* [PATCH 02/33] KVM: PPC: Move kvmppc_ld/st to common code
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
2014-06-22 21:23 ` [PATCH 01/33] KVM: PPC: Implement kvmppc_xlate for all targets Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 03/33] KVM: PPC: Remove kvmppc_bad_hva() Alexander Graf
` (31 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
We now have enough common infrastructure to resolve GVA->GPA mappings at
runtime. With this we can also move our Book3S-specific helpers for loading
from / storing to guest virtual address space into common code.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/kvm_book3s.h | 2 +-
arch/powerpc/include/asm/kvm_host.h | 4 +-
arch/powerpc/include/asm/kvm_ppc.h | 4 ++
arch/powerpc/kvm/book3s.c | 80 -----------------------------------
arch/powerpc/kvm/powerpc.c | 80 +++++++++++++++++++++++++++++++++++
5 files changed, 87 insertions(+), 83 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index a20cc0b..47a9ca6 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -147,8 +147,8 @@ extern int kvmppc_mmu_hpte_sysinit(void);
extern void kvmppc_mmu_hpte_sysexit(void);
extern int kvmppc_mmu_hv_init(void);
+/* XXX remove this export when load_last_inst() is generic */
extern int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr, bool data);
-extern int kvmppc_st(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr, bool data);
extern void kvmppc_book3s_queue_irqprio(struct kvm_vcpu *vcpu, unsigned int vec);
extern void kvmppc_book3s_dequeue_irqprio(struct kvm_vcpu *vcpu,
unsigned int vec);
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index f9ae696..6200215 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -113,15 +113,15 @@ struct kvm_vcpu_stat {
u32 halt_wakeup;
u32 dbell_exits;
u32 gdbell_exits;
+ u32 ld;
+ u32 st;
#ifdef CONFIG_PPC_BOOK3S
u32 pf_storage;
u32 pf_instruc;
u32 sp_storage;
u32 sp_instruc;
u32 queue_intr;
- u32 ld;
u32 ld_slow;
- u32 st;
u32 st_slow;
#endif
};
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 837533a..ecc4588 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -72,6 +72,10 @@ extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
u64 val, unsigned int bytes,
int is_default_endian);
+extern int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
+ bool data);
+extern int kvmppc_st(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
+ bool data);
extern int kvmppc_emulate_instruction(struct kvm_run *run,
struct kvm_vcpu *vcpu);
extern int kvmppc_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu);
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index b644f51..9f2a5ec 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -391,86 +391,6 @@ int kvmppc_xlate(struct kvm_vcpu *vcpu, ulong eaddr, enum xlate_instdata xlid,
return r;
}
-static hva_t kvmppc_bad_hva(void)
-{
- return PAGE_OFFSET;
-}
-
-static hva_t kvmppc_pte_to_hva(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte,
- bool read)
-{
- hva_t hpage;
-
- if (read && !pte->may_read)
- goto err;
-
- if (!read && !pte->may_write)
- goto err;
-
- hpage = gfn_to_hva(vcpu->kvm, pte->raddr >> PAGE_SHIFT);
- if (kvm_is_error_hva(hpage))
- goto err;
-
- return hpage | (pte->raddr & ~PAGE_MASK);
-err:
- return kvmppc_bad_hva();
-}
-
-int kvmppc_st(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
- bool data)
-{
- struct kvmppc_pte pte;
-
- vcpu->stat.st++;
-
- if (kvmppc_xlate(vcpu, *eaddr, data ? XLATE_DATA : XLATE_INST,
- XLATE_WRITE, &pte))
- return -ENOENT;
-
- *eaddr = pte.raddr;
-
- if (!pte.may_write)
- return -EPERM;
-
- if (kvm_write_guest(vcpu->kvm, pte.raddr, ptr, size))
- return EMULATE_DO_MMIO;
-
- return EMULATE_DONE;
-}
-EXPORT_SYMBOL_GPL(kvmppc_st);
-
-int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
- bool data)
-{
- struct kvmppc_pte pte;
- hva_t hva = *eaddr;
-
- vcpu->stat.ld++;
-
- if (kvmppc_xlate(vcpu, *eaddr, data ? XLATE_DATA : XLATE_INST,
- XLATE_READ, &pte))
- goto nopte;
-
- *eaddr = pte.raddr;
-
- hva = kvmppc_pte_to_hva(vcpu, &pte, true);
- if (kvm_is_error_hva(hva))
- goto mmio;
-
- if (copy_from_user(ptr, (void __user *)hva, size)) {
- printk(KERN_INFO "kvmppc_ld at 0x%lx failed\n", hva);
- goto mmio;
- }
-
- return EMULATE_DONE;
-
-nopte:
- return -ENOENT;
-mmio:
- return EMULATE_DO_MMIO;
-}
-EXPORT_SYMBOL_GPL(kvmppc_ld);
-
int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
{
return 0;
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 61c738a..29e7380 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -284,6 +284,86 @@ int kvmppc_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_GPL(kvmppc_emulate_mmio);
+static hva_t kvmppc_bad_hva(void)
+{
+ return PAGE_OFFSET;
+}
+
+static hva_t kvmppc_pte_to_hva(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte,
+ bool read)
+{
+ hva_t hpage;
+
+ if (read && !pte->may_read)
+ goto err;
+
+ if (!read && !pte->may_write)
+ goto err;
+
+ hpage = gfn_to_hva(vcpu->kvm, pte->raddr >> PAGE_SHIFT);
+ if (kvm_is_error_hva(hpage))
+ goto err;
+
+ return hpage | (pte->raddr & ~PAGE_MASK);
+err:
+ return kvmppc_bad_hva();
+}
+
+int kvmppc_st(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
+ bool data)
+{
+ struct kvmppc_pte pte;
+
+ vcpu->stat.st++;
+
+ if (kvmppc_xlate(vcpu, *eaddr, data ? XLATE_DATA : XLATE_INST,
+ XLATE_WRITE, &pte))
+ return -ENOENT;
+
+ *eaddr = pte.raddr;
+
+ if (!pte.may_write)
+ return -EPERM;
+
+ if (kvm_write_guest(vcpu->kvm, pte.raddr, ptr, size))
+ return EMULATE_DO_MMIO;
+
+ return EMULATE_DONE;
+}
+EXPORT_SYMBOL_GPL(kvmppc_st);
+
+int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
+ bool data)
+{
+ struct kvmppc_pte pte;
+ hva_t hva = *eaddr;
+
+ vcpu->stat.ld++;
+
+ if (kvmppc_xlate(vcpu, *eaddr, data ? XLATE_DATA : XLATE_INST,
+ XLATE_READ, &pte))
+ goto nopte;
+
+ *eaddr = pte.raddr;
+
+ hva = kvmppc_pte_to_hva(vcpu, &pte, true);
+ if (kvm_is_error_hva(hva))
+ goto mmio;
+
+ if (copy_from_user(ptr, (void __user *)hva, size)) {
+ printk(KERN_INFO "kvmppc_ld at 0x%lx failed\n", hva);
+ goto mmio;
+ }
+
+ return EMULATE_DONE;
+
+nopte:
+ return -ENOENT;
+mmio:
+ return EMULATE_DO_MMIO;
+}
+EXPORT_SYMBOL_GPL(kvmppc_ld);
+
int kvm_arch_hardware_enable(void *garbage)
{
return 0;
--
1.8.1.4
* [PATCH 03/33] KVM: PPC: Remove kvmppc_bad_hva()
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
2014-06-22 21:23 ` [PATCH 01/33] KVM: PPC: Implement kvmppc_xlate for all targets Alexander Graf
2014-06-22 21:23 ` [PATCH 02/33] KVM: PPC: Move kvmppc_ld/st to common code Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 04/33] KVM: PPC: Propagate kvmppc_xlate errors properly Alexander Graf
` (30 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
We have a proper define for an invalid HVA. Use it instead of the PPC-specific
kvmppc_bad_hva().
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/kvm/powerpc.c | 7 +------
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 29e7380..e25ce60 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -284,11 +284,6 @@ int kvmppc_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_GPL(kvmppc_emulate_mmio);
-static hva_t kvmppc_bad_hva(void)
-{
- return PAGE_OFFSET;
-}
-
static hva_t kvmppc_pte_to_hva(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte,
bool read)
{
@@ -306,7 +301,7 @@ static hva_t kvmppc_pte_to_hva(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte,
return hpage | (pte->raddr & ~PAGE_MASK);
err:
- return kvmppc_bad_hva();
+ return KVM_HVA_ERR_BAD;
}
int kvmppc_st(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
--
1.8.1.4
* [PATCH 04/33] KVM: PPC: Propagate kvmppc_xlate errors properly
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (2 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 03/33] KVM: PPC: Remove kvmppc_bad_hva() Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 05/33] KVM: PPC: Use kvm_read_guest in kvmppc_ld Alexander Graf
` (29 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
The kvmppc_xlate() helper already errors out with meaningful error codes, such
as -ENOENT or -EPERM. Propagate them to our callers in kvmppc_ld/st so they
know what's going on.
While at it, remove the superfluous check of pte->may_write; the xlate
function already checks it.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/kvm/powerpc.c | 21 ++++++++++-----------
1 file changed, 10 insertions(+), 11 deletions(-)
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index e25ce60..c5dbc66 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -308,18 +308,17 @@ int kvmppc_st(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
bool data)
{
struct kvmppc_pte pte;
+ int r;
vcpu->stat.st++;
- if (kvmppc_xlate(vcpu, *eaddr, data ? XLATE_DATA : XLATE_INST,
- XLATE_WRITE, &pte))
- return -ENOENT;
+ r = kvmppc_xlate(vcpu, *eaddr, data ? XLATE_DATA : XLATE_INST,
+ XLATE_WRITE, &pte);
+ if (r)
+ return r;
*eaddr = pte.raddr;
- if (!pte.may_write)
- return -EPERM;
-
if (kvm_write_guest(vcpu->kvm, pte.raddr, ptr, size))
return EMULATE_DO_MMIO;
@@ -332,12 +331,14 @@ int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
{
struct kvmppc_pte pte;
hva_t hva = *eaddr;
+ int r;
vcpu->stat.ld++;
- if (kvmppc_xlate(vcpu, *eaddr, data ? XLATE_DATA : XLATE_INST,
- XLATE_READ, &pte))
- goto nopte;
+ r = kvmppc_xlate(vcpu, *eaddr, data ? XLATE_DATA : XLATE_INST,
+ XLATE_READ, &pte);
+ if (r)
+ return r;
*eaddr = pte.raddr;
@@ -352,8 +353,6 @@ int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
return EMULATE_DONE;
-nopte:
- return -ENOENT;
mmio:
return EMULATE_DO_MMIO;
}
--
1.8.1.4
* [PATCH 05/33] KVM: PPC: Use kvm_read_guest in kvmppc_ld
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (3 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 04/33] KVM: PPC: Propagate kvmppc_xlate errors properly Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 06/33] KVM: PPC: Handle magic page in kvmppc_ld/st Alexander Graf
` (28 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
We have a nice and handy helper, kvm_read_guest(), to read from guest physical
address space, so we should make use of it in kvmppc_ld, just as we already
use kvm_write_guest() in kvmppc_st.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/kvm/powerpc.c | 34 ++--------------------------------
1 file changed, 2 insertions(+), 32 deletions(-)
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index c5dbc66..2a8eaa9 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -284,26 +284,6 @@ int kvmppc_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_GPL(kvmppc_emulate_mmio);
-static hva_t kvmppc_pte_to_hva(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte,
- bool read)
-{
- hva_t hpage;
-
- if (read && !pte->may_read)
- goto err;
-
- if (!read && !pte->may_write)
- goto err;
-
- hpage = gfn_to_hva(vcpu->kvm, pte->raddr >> PAGE_SHIFT);
- if (kvm_is_error_hva(hpage))
- goto err;
-
- return hpage | (pte->raddr & ~PAGE_MASK);
-err:
- return KVM_HVA_ERR_BAD;
-}
-
int kvmppc_st(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
bool data)
{
@@ -330,7 +310,6 @@ int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
bool data)
{
struct kvmppc_pte pte;
- hva_t hva = *eaddr;
int r;
vcpu->stat.ld++;
@@ -342,19 +321,10 @@ int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
*eaddr = pte.raddr;
- hva = kvmppc_pte_to_hva(vcpu, &pte, true);
- if (kvm_is_error_hva(hva))
- goto mmio;
-
- if (copy_from_user(ptr, (void __user *)hva, size)) {
- printk(KERN_INFO "kvmppc_ld at 0x%lx failed\n", hva);
- goto mmio;
- }
+ if (kvm_read_guest(vcpu->kvm, pte.raddr, ptr, size))
+ return EMULATE_DO_MMIO;
return EMULATE_DONE;
-
-mmio:
- return EMULATE_DO_MMIO;
}
EXPORT_SYMBOL_GPL(kvmppc_ld);
--
1.8.1.4
* [PATCH 06/33] KVM: PPC: Handle magic page in kvmppc_ld/st
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (4 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 05/33] KVM: PPC: Use kvm_read_guest in kvmppc_ld Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 07/33] KVM: PPC: Separate loadstore emulation from priv emulation Alexander Graf
` (27 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
We use kvmppc_ld and kvmppc_st to emulate load/store instructions that may
also access the magic page. Special-case the magic page so that we can access
it properly.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/kvm_book3s.h | 7 +++++++
arch/powerpc/include/asm/kvm_booke.h | 10 ++++++++++
arch/powerpc/kvm/powerpc.c | 22 ++++++++++++++++++++++
3 files changed, 39 insertions(+)
diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 47a9ca6..5d5ae7d 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -308,6 +308,13 @@ static inline bool is_kvmppc_resume_guest(int r)
return (r == RESUME_GUEST || r == RESUME_GUEST_NV);
}
+static inline bool is_kvmppc_hv_enabled(struct kvm *kvm);
+static inline bool kvmppc_supports_magic_page(struct kvm_vcpu *vcpu)
+{
+ /* Only PR KVM supports the magic page */
+ return !is_kvmppc_hv_enabled(vcpu->kvm);
+}
+
/* Magic register values loaded into r3 and r4 before the 'sc' assembly
* instruction for the OSI hypercalls */
#define OSI_SC_MAGIC_R3 0x113724FA
diff --git a/arch/powerpc/include/asm/kvm_booke.h b/arch/powerpc/include/asm/kvm_booke.h
index c7aed61..ca25da8 100644
--- a/arch/powerpc/include/asm/kvm_booke.h
+++ b/arch/powerpc/include/asm/kvm_booke.h
@@ -108,4 +108,14 @@ static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
{
return vcpu->arch.fault_dear;
}
+
+static inline bool kvmppc_supports_magic_page(struct kvm_vcpu *vcpu)
+{
+ /* Magic page is only supported on e500v2 */
+#ifdef CONFIG_KVM_E500V2
+ return true;
+#else
+ return false;
+#endif
+}
#endif /* __ASM_KVM_BOOKE_H__ */
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 2a8eaa9..c16ae8b 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -287,6 +287,7 @@ EXPORT_SYMBOL_GPL(kvmppc_emulate_mmio);
int kvmppc_st(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
bool data)
{
+ ulong mp_pa = vcpu->arch.magic_page_pa & KVM_PAM & PAGE_MASK;
struct kvmppc_pte pte;
int r;
@@ -299,6 +300,16 @@ int kvmppc_st(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
*eaddr = pte.raddr;
+ /* Magic page override */
+ if (kvmppc_supports_magic_page(vcpu) && mp_pa &&
+ ((pte.raddr & KVM_PAM & PAGE_MASK) == mp_pa) &&
+ !(kvmppc_get_msr(vcpu) & MSR_PR)) {
+ void *magic = vcpu->arch.shared;
+ magic += pte.eaddr & 0xfff;
+ memcpy(magic, ptr, size);
+ return EMULATE_DONE;
+ }
+
if (kvm_write_guest(vcpu->kvm, pte.raddr, ptr, size))
return EMULATE_DO_MMIO;
@@ -309,6 +320,7 @@ EXPORT_SYMBOL_GPL(kvmppc_st);
int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
bool data)
{
+ ulong mp_pa = vcpu->arch.magic_page_pa & KVM_PAM & PAGE_MASK;
struct kvmppc_pte pte;
int r;
@@ -321,6 +333,16 @@ int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
*eaddr = pte.raddr;
+ /* Magic page override */
+ if (kvmppc_supports_magic_page(vcpu) && mp_pa &&
+ ((pte.raddr & KVM_PAM & PAGE_MASK) == mp_pa) &&
+ !(kvmppc_get_msr(vcpu) & MSR_PR)) {
+ void *magic = vcpu->arch.shared;
+ magic += pte.eaddr & 0xfff;
+ memcpy(ptr, magic, size);
+ return EMULATE_DONE;
+ }
+
if (kvm_read_guest(vcpu->kvm, pte.raddr, ptr, size))
return EMULATE_DO_MMIO;
--
1.8.1.4
* [PATCH 07/33] KVM: PPC: Separate loadstore emulation from priv emulation
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (5 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 06/33] KVM: PPC: Handle magic page in kvmppc_ld/st Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 08/33] KVM: PPC: Introduce emulation for unprivileged instructions Alexander Graf
` (26 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
Today the instruction emulator can be called via two separate code paths: it
is either invoked by the MMIO emulation detection code or by privileged
instruction traps.
This is bad, as the two code paths prepare the environment differently. For
MMIO emulation we already know the virtual address we faulted on, so the
emulation there does not have to fetch that information again.
Split the two use cases out into separate files.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/kvm_ppc.h | 1 +
arch/powerpc/kvm/Makefile | 4 +-
arch/powerpc/kvm/emulate.c | 190 -------------------------
arch/powerpc/kvm/emulate_loadstore.c | 266 +++++++++++++++++++++++++++++++++++
arch/powerpc/kvm/powerpc.c | 2 +-
5 files changed, 271 insertions(+), 192 deletions(-)
create mode 100644 arch/powerpc/kvm/emulate_loadstore.c
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index ecc4588..ad8a1d7 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -78,6 +78,7 @@ extern int kvmppc_st(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
bool data);
extern int kvmppc_emulate_instruction(struct kvm_run *run,
struct kvm_vcpu *vcpu);
+extern int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu);
extern int kvmppc_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu);
extern void kvmppc_emulate_dec(struct kvm_vcpu *vcpu);
extern u32 kvmppc_get_dec(struct kvm_vcpu *vcpu, u64 tb);
diff --git a/arch/powerpc/kvm/Makefile b/arch/powerpc/kvm/Makefile
index ce569b6..7d44aeb 100644
--- a/arch/powerpc/kvm/Makefile
+++ b/arch/powerpc/kvm/Makefile
@@ -14,8 +14,9 @@ CFLAGS_44x_tlb.o := -I.
CFLAGS_e500_mmu.o := -I.
CFLAGS_e500_mmu_host.o := -I.
CFLAGS_emulate.o := -I.
+CFLAGS_emulate_loadstore.o := -I.
-common-objs-y += powerpc.o emulate.o
+common-objs-y += powerpc.o emulate.o emulate_loadstore.o
obj-$(CONFIG_KVM_EXIT_TIMING) += timing.o
obj-$(CONFIG_KVM_BOOK3S_HANDLER) += book3s_exports.o
@@ -102,6 +103,7 @@ kvm-book3s_64-module-objs += \
$(KVM)/eventfd.o \
powerpc.o \
emulate.o \
+ emulate_loadstore.o \
book3s.o \
book3s_64_vio.o \
book3s_rtas.o \
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index da86d9b..c92476e 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -207,25 +207,11 @@ static int kvmppc_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
return emulated;
}
-/* XXX to do:
- * lhax
- * lhaux
- * lswx
- * lswi
- * stswx
- * stswi
- * lha
- * lhau
- * lmw
- * stmw
- *
- */
/* XXX Should probably auto-generate instruction decoding for a particular core
* from opcode tables in the future. */
int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
{
u32 inst = kvmppc_get_last_inst(vcpu);
- int ra = get_ra(inst);
int rs = get_rs(inst);
int rt = get_rt(inst);
int sprn = get_sprn(inst);
@@ -264,200 +250,24 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
#endif
advance = 0;
break;
- case OP_31_XOP_LWZX:
- emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
- break;
-
- case OP_31_XOP_LBZX:
- emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
- break;
-
- case OP_31_XOP_LBZUX:
- emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
- case OP_31_XOP_STWX:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs),
- 4, 1);
- break;
-
- case OP_31_XOP_STBX:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs),
- 1, 1);
- break;
-
- case OP_31_XOP_STBUX:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs),
- 1, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
- case OP_31_XOP_LHAX:
- emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
- break;
-
- case OP_31_XOP_LHZX:
- emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
- break;
-
- case OP_31_XOP_LHZUX:
- emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
case OP_31_XOP_MFSPR:
emulated = kvmppc_emulate_mfspr(vcpu, sprn, rt);
break;
- case OP_31_XOP_STHX:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs),
- 2, 1);
- break;
-
- case OP_31_XOP_STHUX:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs),
- 2, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
case OP_31_XOP_MTSPR:
emulated = kvmppc_emulate_mtspr(vcpu, sprn, rs);
break;
- case OP_31_XOP_DCBST:
- case OP_31_XOP_DCBF:
- case OP_31_XOP_DCBI:
- /* Do nothing. The guest is performing dcbi because
- * hardware DMA is not snooped by the dcache, but
- * emulated DMA either goes through the dcache as
- * normal writes, or the host kernel has handled dcache
- * coherence. */
- break;
-
- case OP_31_XOP_LWBRX:
- emulated = kvmppc_handle_load(run, vcpu, rt, 4, 0);
- break;
-
case OP_31_XOP_TLBSYNC:
break;
- case OP_31_XOP_STWBRX:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs),
- 4, 0);
- break;
-
- case OP_31_XOP_LHBRX:
- emulated = kvmppc_handle_load(run, vcpu, rt, 2, 0);
- break;
-
- case OP_31_XOP_STHBRX:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs),
- 2, 0);
- break;
-
default:
/* Attempt core-specific emulation below. */
emulated = EMULATE_FAIL;
}
break;
- case OP_LWZ:
- emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
- break;
-
- /* TBD: Add support for other 64 bit load variants like ldu, ldux, ldx etc. */
- case OP_LD:
- rt = get_rt(inst);
- emulated = kvmppc_handle_load(run, vcpu, rt, 8, 1);
- break;
-
- case OP_LWZU:
- emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
- case OP_LBZ:
- emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
- break;
-
- case OP_LBZU:
- emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
- case OP_STW:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs),
- 4, 1);
- break;
-
- /* TBD: Add support for other 64 bit store variants like stdu, stdux, stdx etc. */
- case OP_STD:
- rs = get_rs(inst);
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs),
- 8, 1);
- break;
-
- case OP_STWU:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs),
- 4, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
- case OP_STB:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs),
- 1, 1);
- break;
-
- case OP_STBU:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs),
- 1, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
- case OP_LHZ:
- emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
- break;
-
- case OP_LHZU:
- emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
- case OP_LHA:
- emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
- break;
-
- case OP_LHAU:
- emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
- case OP_STH:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs),
- 2, 1);
- break;
-
- case OP_STHU:
- emulated = kvmppc_handle_store(run, vcpu,
- kvmppc_get_gpr(vcpu, rs),
- 2, 1);
- kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
- break;
-
default:
emulated = EMULATE_FAIL;
}
diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
new file mode 100644
index 0000000..6f0feef
--- /dev/null
+++ b/arch/powerpc/kvm/emulate_loadstore.c
@@ -0,0 +1,266 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ *
+ * Copyright IBM Corp. 2007
+ * Copyright 2011 Freescale Semiconductor, Inc.
+ *
+ * Authors: Hollis Blanchard <hollisb@us.ibm.com>
+ */
+
+#include <linux/jiffies.h>
+#include <linux/hrtimer.h>
+#include <linux/types.h>
+#include <linux/string.h>
+#include <linux/kvm_host.h>
+#include <linux/clockchips.h>
+
+#include <asm/reg.h>
+#include <asm/time.h>
+#include <asm/byteorder.h>
+#include <asm/kvm_ppc.h>
+#include <asm/disassemble.h>
+#include <asm/ppc-opcode.h>
+#include "timing.h"
+#include "trace.h"
+
+/* XXX to do:
+ * lhax
+ * lhaux
+ * lswx
+ * lswi
+ * stswx
+ * stswi
+ * lha
+ * lhau
+ * lmw
+ * stmw
+ *
+ */
+int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
+{
+ u32 inst = kvmppc_get_last_inst(vcpu);
+ int ra = get_ra(inst);
+ int rs = get_rs(inst);
+ int rt = get_rt(inst);
+ enum emulation_result emulated = EMULATE_DONE;
+ struct kvm_run *run = vcpu->run;
+ int advance = 1;
+
+ /* this default type might be overwritten by subcategories */
+ kvmppc_set_exit_type(vcpu, EMULATED_INST_EXITS);
+
+ switch (get_op(inst)) {
+ case 31:
+ switch (get_xop(inst)) {
+ case OP_31_XOP_LWZX:
+ emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
+ break;
+
+ case OP_31_XOP_LBZX:
+ emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
+ break;
+
+ case OP_31_XOP_LBZUX:
+ emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
+ kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
+ break;
+
+ case OP_31_XOP_STWX:
+ emulated = kvmppc_handle_store(run, vcpu,
+ kvmppc_get_gpr(vcpu, rs),
+ 4, 1);
+ break;
+
+ case OP_31_XOP_STBX:
+ emulated = kvmppc_handle_store(run, vcpu,
+ kvmppc_get_gpr(vcpu, rs),
+ 1, 1);
+ break;
+
+ case OP_31_XOP_STBUX:
+ emulated = kvmppc_handle_store(run, vcpu,
+ kvmppc_get_gpr(vcpu, rs),
+ 1, 1);
+ kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
+ break;
+
+ case OP_31_XOP_LHAX:
+ emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
+ break;
+
+ case OP_31_XOP_LHZX:
+ emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
+ break;
+
+ case OP_31_XOP_LHZUX:
+ emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
+ kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
+ break;
+
+ case OP_31_XOP_STHX:
+ emulated = kvmppc_handle_store(run, vcpu,
+ kvmppc_get_gpr(vcpu, rs),
+ 2, 1);
+ break;
+
+ case OP_31_XOP_STHUX:
+ emulated = kvmppc_handle_store(run, vcpu,
+ kvmppc_get_gpr(vcpu, rs),
+ 2, 1);
+ kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
+ break;
+
+ case OP_31_XOP_DCBST:
+ case OP_31_XOP_DCBF:
+ case OP_31_XOP_DCBI:
+ /* Do nothing. The guest is performing dcbi because
+ * hardware DMA is not snooped by the dcache, but
+ * emulated DMA either goes through the dcache as
+ * normal writes, or the host kernel has handled dcache
+ * coherence. */
+ break;
+
+ case OP_31_XOP_LWBRX:
+ emulated = kvmppc_handle_load(run, vcpu, rt, 4, 0);
+ break;
+
+ case OP_31_XOP_STWBRX:
+ emulated = kvmppc_handle_store(run, vcpu,
+ kvmppc_get_gpr(vcpu, rs),
+ 4, 0);
+ break;
+
+ case OP_31_XOP_LHBRX:
+ emulated = kvmppc_handle_load(run, vcpu, rt, 2, 0);
+ break;
+
+ case OP_31_XOP_STHBRX:
+ emulated = kvmppc_handle_store(run, vcpu,
+ kvmppc_get_gpr(vcpu, rs),
+ 2, 0);
+ break;
+
+ default:
+ emulated = EMULATE_FAIL;
+ break;
+ }
+ break;
+
+ case OP_LWZ:
+ emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
+ break;
+
+ /* TBD: Add support for other 64 bit load variants like ldu, ldux, ldx etc. */
+ case OP_LD:
+ rt = get_rt(inst);
+ emulated = kvmppc_handle_load(run, vcpu, rt, 8, 1);
+ break;
+
+ case OP_LWZU:
+ emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
+ kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
+ break;
+
+ case OP_LBZ:
+ emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
+ break;
+
+ case OP_LBZU:
+ emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
+ kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
+ break;
+
+ case OP_STW:
+ emulated = kvmppc_handle_store(run, vcpu,
+ kvmppc_get_gpr(vcpu, rs),
+ 4, 1);
+ break;
+
+ /* TBD: Add support for other 64 bit store variants like stdu, stdux, stdx etc. */
+ case OP_STD:
+ rs = get_rs(inst);
+ emulated = kvmppc_handle_store(run, vcpu,
+ kvmppc_get_gpr(vcpu, rs),
+ 8, 1);
+ break;
+
+ case OP_STWU:
+ emulated = kvmppc_handle_store(run, vcpu,
+ kvmppc_get_gpr(vcpu, rs),
+ 4, 1);
+ kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
+ break;
+
+ case OP_STB:
+ emulated = kvmppc_handle_store(run, vcpu,
+ kvmppc_get_gpr(vcpu, rs),
+ 1, 1);
+ break;
+
+ case OP_STBU:
+ emulated = kvmppc_handle_store(run, vcpu,
+ kvmppc_get_gpr(vcpu, rs),
+ 1, 1);
+ kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
+ break;
+
+ case OP_LHZ:
+ emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
+ break;
+
+ case OP_LHZU:
+ emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
+ kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
+ break;
+
+ case OP_LHA:
+ emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
+ break;
+
+ case OP_LHAU:
+ emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
+ kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
+ break;
+
+ case OP_STH:
+ emulated = kvmppc_handle_store(run, vcpu,
+ kvmppc_get_gpr(vcpu, rs),
+ 2, 1);
+ break;
+
+ case OP_STHU:
+ emulated = kvmppc_handle_store(run, vcpu,
+ kvmppc_get_gpr(vcpu, rs),
+ 2, 1);
+ kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
+ break;
+
+ default:
+ emulated = EMULATE_FAIL;
+ break;
+ }
+
+ if (emulated == EMULATE_FAIL) {
+ advance = 0;
+ kvmppc_core_queue_program(vcpu, 0);
+ }
+
+ trace_kvm_ppc_instr(inst, kvmppc_get_pc(vcpu), emulated);
+
+ /* Advance past emulated instruction. */
+ if (advance)
+ kvmppc_set_pc(vcpu, kvmppc_get_pc(vcpu) + 4);
+
+ return emulated;
+}
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index c16ae8b..5b1a8d6 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -254,7 +254,7 @@ int kvmppc_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu)
enum emulation_result er;
int r;
- er = kvmppc_emulate_instruction(run, vcpu);
+ er = kvmppc_emulate_loadstore(vcpu);
switch (er) {
case EMULATE_DONE:
/* Future optimization: only reload non-volatiles if they were
--
1.8.1.4
* [PATCH 08/33] KVM: PPC: Introduce emulation for unprivileged instructions
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (6 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 07/33] KVM: PPC: Separate loadstore emulation from priv emulation Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 09/33] KVM: PPC: Move critical section detection to common code Alexander Graf
` (25 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
We want to be able to emulate instructions during phases where we can't
run the guest in guest context, such as the "critical section" phase
in our magic page helpers.
Add an emulation helper for that case that allows us to emulate both
privileged and unprivileged instructions.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/kvm_ppc.h | 1 +
arch/powerpc/kvm/emulate.c | 55 +++++++++++++++++++++++++++++++++++---
2 files changed, 52 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index ad8a1d7..b141eae 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -78,6 +78,7 @@ extern int kvmppc_st(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
bool data);
extern int kvmppc_emulate_instruction(struct kvm_run *run,
struct kvm_vcpu *vcpu);
+extern int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu);
extern int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu);
extern int kvmppc_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu);
extern void kvmppc_emulate_dec(struct kvm_vcpu *vcpu);
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index c92476e..7bf247e0 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -209,14 +209,13 @@ static int kvmppc_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
/* XXX Should probably auto-generate instruction decoding for a particular core
* from opcode tables in the future. */
-int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
+static int kvmppc_emulate_priv_instruction(struct kvm_vcpu *vcpu, int *advance)
{
u32 inst = kvmppc_get_last_inst(vcpu);
int rs = get_rs(inst);
int rt = get_rt(inst);
int sprn = get_sprn(inst);
enum emulation_result emulated = EMULATE_DONE;
- int advance = 1;
/* this default type might be overwritten by subcategories */
kvmppc_set_exit_type(vcpu, EMULATED_INST_EXITS);
@@ -232,7 +231,7 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
kvmppc_core_queue_program(vcpu,
vcpu->arch.shared->esr | ESR_PTR);
#endif
- advance = 0;
+ *advance = 0;
break;
case 31:
@@ -248,7 +247,7 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
kvmppc_core_queue_program(vcpu,
vcpu->arch.shared->esr | ESR_PTR);
#endif
- advance = 0;
+ *advance = 0;
break;
case OP_31_XOP_MFSPR:
@@ -272,6 +271,17 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
emulated = EMULATE_FAIL;
}
+ return emulated;
+}
+
+/* Emulates privileged instructions only */
+int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
+{
+ u32 inst = kvmppc_get_last_inst(vcpu);
+ enum emulation_result emulated;
+ int advance = 1;
+
+ emulated = kvmppc_emulate_priv_instruction(vcpu, &advance);
if (emulated == EMULATE_FAIL) {
emulated = vcpu->kvm->arch.kvm_ops->emulate_op(run, vcpu, inst,
&advance);
@@ -294,3 +304,40 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
return emulated;
}
EXPORT_SYMBOL_GPL(kvmppc_emulate_instruction);
+
+/* Emulates privileged and non-privileged instructions */
+int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
+{
+ u32 inst = kvmppc_get_last_inst(vcpu);
+ enum emulation_result emulated = EMULATE_DONE;
+ int advance = 1;
+
+ kvmppc_set_exit_type(vcpu, EMULATED_INST_EXITS);
+
+ /* Try non-privileged instructions */
+ switch (get_op(inst)) {
+ default:
+ emulated = EMULATE_FAIL;
+ }
+
+ /* Try privileged instructions */
+ if (emulated == EMULATE_FAIL)
+ emulated = kvmppc_emulate_priv_instruction(vcpu, &advance);
+
+ if (emulated == EMULATE_AGAIN) {
+ advance = 0;
+ } else if (emulated == EMULATE_FAIL) {
+ advance = 0;
+ printk(KERN_ERR "Couldn't emulate instruction 0x%08x "
+ "(op %d xop %d)\n", inst, get_op(inst), get_xop(inst));
+ kvmppc_core_queue_program(vcpu, 0);
+ }
+
+ trace_kvm_ppc_instr(inst, kvmppc_get_pc(vcpu), emulated);
+
+ /* Advance past emulated instruction. */
+ if (advance)
+ kvmppc_set_pc(vcpu, kvmppc_get_pc(vcpu) + 4);
+
+ return emulated;
+}
--
1.8.1.4
* [PATCH 09/33] KVM: PPC: Move critical section detection to common code
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (7 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 08/33] KVM: PPC: Introduce emulation for unprivileged instructions Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 10/33] KVM: PPC: Make critical section detection conditional Alexander Graf
` (24 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
We can have critical sections on BookE as well as Book3S. Move the code
that detects whether the guest is in a critical section to generic code.
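The check being moved can be modeled in plain userspace C. The sketch below mirrors the logic of kvmppc_critical_section() from the diff; the vcpu struct and the MSR bit values are simplified stand-ins, not the kernel's definitions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for the guest state the real helper reads. */
#define MSR_SF (1ULL << 63) /* 64-bit mode */
#define MSR_PR (1ULL << 14) /* problem (user) state */

struct vcpu_model {
	uint64_t msr;      /* guest MSR */
	uint64_t critical; /* magic-page "critical" marker */
	uint64_t r1;       /* guest stack pointer */
};

/* In a critical section iff the critical marker equals r1 (both
 * truncated to 32 bits outside 64-bit mode) and the guest runs in
 * supervisor mode. */
static bool critical_section(const struct vcpu_model *v)
{
	uint64_t crit_raw = v->critical;
	uint64_t crit_r1 = v->r1;

	if (!(v->msr & MSR_SF)) {
		crit_raw &= 0xffffffff;
		crit_r1 &= 0xffffffff;
	}
	return crit_raw == crit_r1 && !(v->msr & MSR_PR);
}
```

The guest arms the marker by writing r1 into the shared page before the replaced instruction stream, so equality means "don't inject interrupts yet".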
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/kvm_ppc.h | 1 +
arch/powerpc/kvm/book3s.c | 26 --------------------------
arch/powerpc/kvm/powerpc.c | 26 ++++++++++++++++++++++++++
3 files changed, 27 insertions(+), 26 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index b141eae..6f7a4e5 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -117,6 +117,7 @@ extern int kvmppc_core_vcpu_translate(struct kvm_vcpu *vcpu,
extern void kvmppc_core_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
extern void kvmppc_core_vcpu_put(struct kvm_vcpu *vcpu);
+extern bool kvmppc_critical_section(struct kvm_vcpu *vcpu);
extern int kvmppc_core_prepare_to_enter(struct kvm_vcpu *vcpu);
extern int kvmppc_core_pending_dec(struct kvm_vcpu *vcpu);
extern void kvmppc_core_queue_program(struct kvm_vcpu *vcpu, ulong flags);
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 9f2a5ec..d66d31e06 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -90,32 +90,6 @@ static inline void kvmppc_update_int_pending(struct kvm_vcpu *vcpu,
kvmppc_set_int_pending(vcpu, 0);
}
-static inline bool kvmppc_critical_section(struct kvm_vcpu *vcpu)
-{
- ulong crit_raw;
- ulong crit_r1;
- bool crit;
-
- if (is_kvmppc_hv_enabled(vcpu->kvm))
- return false;
-
- crit_raw = kvmppc_get_critical(vcpu);
- crit_r1 = kvmppc_get_gpr(vcpu, 1);
-
- /* Truncate crit indicators in 32 bit mode */
- if (!(kvmppc_get_msr(vcpu) & MSR_SF)) {
- crit_raw &= 0xffffffff;
- crit_r1 &= 0xffffffff;
- }
-
- /* Critical section when crit == r1 */
- crit = (crit_raw == crit_r1);
- /* ... and we're in supervisor mode */
- crit = crit && !(kvmppc_get_msr(vcpu) & MSR_PR);
-
- return crit;
-}
-
void kvmppc_inject_interrupt(struct kvm_vcpu *vcpu, int vec, u64 flags)
{
kvmppc_set_srr0(vcpu, kvmppc_get_pc(vcpu));
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 5b1a8d6..2269799 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -57,6 +57,32 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
return 1;
}
+bool kvmppc_critical_section(struct kvm_vcpu *vcpu)
+{
+ ulong crit_raw;
+ ulong crit_r1;
+ bool crit;
+
+ if (is_kvmppc_hv_enabled(vcpu->kvm))
+ return false;
+
+ crit_raw = kvmppc_get_critical(vcpu);
+ crit_r1 = kvmppc_get_gpr(vcpu, 1);
+
+ /* Truncate crit indicators in 32 bit mode */
+ if (!(kvmppc_get_msr(vcpu) & MSR_SF)) {
+ crit_raw &= 0xffffffff;
+ crit_r1 &= 0xffffffff;
+ }
+
+ /* Critical section when crit == r1 */
+ crit = (crit_raw == crit_r1);
+ /* ... and we're in supervisor mode */
+ crit = crit && !(kvmppc_get_msr(vcpu) & MSR_PR);
+
+ return crit;
+}
+
/*
* Common checks before entering the guest world. Call with interrupts
* disabled.
--
1.8.1.4
* [PATCH 10/33] KVM: PPC: Make critical section detection conditional
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (8 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 09/33] KVM: PPC: Move critical section detection to common code Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 11/33] KVM: PPC: BookE: Use common critical section helper Alexander Graf
` (23 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
We only ever support critical sections when we also support the magic page.
Without magic page support the guest can't ever get into a critical section
condition.
With this condition we cover both the Book3S HV/PR case and the BookE
HV/PR one.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/kvm/powerpc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 2269799..230b2bc 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -63,7 +63,7 @@ bool kvmppc_critical_section(struct kvm_vcpu *vcpu)
ulong crit_r1;
bool crit;
- if (is_kvmppc_hv_enabled(vcpu->kvm))
+ if (!kvmppc_supports_magic_page(vcpu))
return false;
crit_raw = kvmppc_get_critical(vcpu);
--
1.8.1.4
* [PATCH 11/33] KVM: PPC: BookE: Use common critical section helper
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (9 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 10/33] KVM: PPC: Make critical section detection conditional Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 12/33] KVM: PPC: Emulate critical sections when we hit them Alexander Graf
` (22 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
We now have common code to detect whether we are inside a critical section.
Make use of it on BookE.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/kvm/booke.c | 15 +--------------
1 file changed, 1 insertion(+), 14 deletions(-)
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 58d9b6c..eb00d5d 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -349,24 +349,11 @@ static int kvmppc_booke_irqprio_deliver(struct kvm_vcpu *vcpu,
int allowed = 0;
ulong msr_mask = 0;
bool update_esr = false, update_dear = false, update_epr = false;
- ulong crit_raw = vcpu->arch.shared->critical;
- ulong crit_r1 = kvmppc_get_gpr(vcpu, 1);
- bool crit;
+ bool crit = kvmppc_critical_section(vcpu);
bool keep_irq = false;
enum int_class int_class;
ulong new_msr = vcpu->arch.shared->msr;
- /* Truncate crit indicators in 32 bit mode */
- if (!(vcpu->arch.shared->msr & MSR_SF)) {
- crit_raw &= 0xffffffff;
- crit_r1 &= 0xffffffff;
- }
-
- /* Critical section when crit == r1 */
- crit = (crit_raw == crit_r1);
- /* ... and we're in supervisor mode */
- crit = crit && !(vcpu->arch.shared->msr & MSR_PR);
-
if (priority == BOOKE_IRQPRIO_EXTERNAL_LEVEL) {
priority = BOOKE_IRQPRIO_EXTERNAL;
keep_irq = true;
--
1.8.1.4
* [PATCH 12/33] KVM: PPC: Emulate critical sections when we hit them
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (10 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 11/33] KVM: PPC: BookE: Use common critical section helper Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 13/33] KVM: PPC: Expose helper functions for data/inst faults Alexander Graf
` (21 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
Usually the idea behind critical sections is that we never trap inside
them. However, we may not always be that lucky.
When we do hit a critical section while an interrupt is pending, we need
to make sure we can inject the interrupt right when the critical section
is over.
To achieve this, we just emulate every single instruction inside the
critical section until we're out of it.
Note: For now we don't trigger this code path until we properly emulate
all the instructions necessary to run Linux guests well again.
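The single-step loop added to kvmppc_prepare_to_enter() can be sketched as a userspace toy model; the guest, its program counter, and the critical-section bounds below are hypothetical stand-ins for illustration only.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy guest: the critical section spans pc values below crit_end. */
struct toy_vcpu {
	unsigned long pc;
	unsigned long crit_end;
	bool irq_pending;
	bool irq_delivered;
	int emulated_steps;
};

static bool in_critical(const struct toy_vcpu *v)
{
	return v->pc < v->crit_end;
}

/* Mirrors the idea of the kvmppc_prepare_to_enter() hunk: while an
 * interrupt is pending and the guest sits in a critical section,
 * interpret one instruction at a time instead of entering the guest,
 * then inject the interrupt once the section is over. */
static void prepare_to_enter(struct toy_vcpu *v)
{
	while (v->irq_pending && in_critical(v)) {
		v->pc += 4; /* emulate exactly one 4-byte instruction */
		v->emulated_steps++;
	}
	if (v->irq_pending) { /* safe to inject now */
		v->irq_pending = false;
		v->irq_delivered = true;
	}
}
```

The point of the design is that the interrupt is delivered deterministically at the section's end rather than deferred to some unknown future trap.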
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/kvm_ppc.h | 1 +
arch/powerpc/kvm/book3s.c | 15 +++++++++++++++
arch/powerpc/kvm/booke.c | 35 +++++++++++++++++++++++++++++++++++
arch/powerpc/kvm/powerpc.c | 23 +++++++++++++++++++++++
4 files changed, 74 insertions(+)
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 6f7a4e5..ef97e0b 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -117,6 +117,7 @@ extern int kvmppc_core_vcpu_translate(struct kvm_vcpu *vcpu,
extern void kvmppc_core_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
extern void kvmppc_core_vcpu_put(struct kvm_vcpu *vcpu);
+extern bool kvmppc_crit_inhibited_irq_pending(struct kvm_vcpu *vcpu);
extern bool kvmppc_critical_section(struct kvm_vcpu *vcpu);
extern int kvmppc_core_prepare_to_enter(struct kvm_vcpu *vcpu);
extern int kvmppc_core_pending_dec(struct kvm_vcpu *vcpu);
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index d66d31e06..ab54976 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -192,6 +192,21 @@ void kvmppc_core_dequeue_external(struct kvm_vcpu *vcpu)
kvmppc_book3s_dequeue_irqprio(vcpu, BOOK3S_INTERRUPT_EXTERNAL_LEVEL);
}
+bool kvmppc_crit_inhibited_irq_pending(struct kvm_vcpu *vcpu)
+{
+ unsigned long *p = &vcpu->arch.pending_exceptions;
+
+ if (!(kvmppc_get_msr(vcpu) & MSR_EE))
+ return false;
+
+ if (test_bit(BOOK3S_IRQPRIO_DECREMENTER, p) ||
+ test_bit(BOOK3S_IRQPRIO_EXTERNAL, p) ||
+ test_bit(BOOK3S_IRQPRIO_EXTERNAL_LEVEL, p))
+ return true;
+
+ return false;
+}
+
int kvmppc_book3s_irqprio_deliver(struct kvm_vcpu *vcpu, unsigned int priority)
{
int deliver = 1;
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index eb00d5d..cbe9832 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -342,6 +342,41 @@ static unsigned long get_guest_epr(struct kvm_vcpu *vcpu)
#endif
}
+bool kvmppc_crit_inhibited_irq_pending(struct kvm_vcpu *vcpu)
+{
+ unsigned long *p = &vcpu->arch.pending_exceptions;
+ bool ee = !!(kvmppc_get_msr(vcpu) & MSR_EE);
+ bool ce = !!(kvmppc_get_msr(vcpu) & MSR_CE);
+ bool me = !!(kvmppc_get_msr(vcpu) & MSR_ME);
+ bool de = !!(kvmppc_get_msr(vcpu) & MSR_DE);
+
+ if (ee) {
+ if (test_bit(BOOKE_IRQPRIO_EXTERNAL, p) ||
+ test_bit(BOOKE_IRQPRIO_DBELL, p))
+ return true;
+ }
+
+ if (ce) {
+ if (test_bit(BOOKE_IRQPRIO_WATCHDOG, p) ||
+ test_bit(BOOKE_IRQPRIO_CRITICAL, p) ||
+ test_bit(BOOKE_IRQPRIO_DBELL_CRIT, p))
+ return true;
+ }
+
+ if (me) {
+ if (test_bit(BOOKE_IRQPRIO_MACHINE_CHECK, p))
+ return true;
+ }
+
+ if (de) {
+ if (test_bit(BOOKE_IRQPRIO_DEBUG, p))
+ return true;
+ }
+
+ return false;
+}
+
+
/* Deliver the interrupt of the corresponding priority, if possible. */
static int kvmppc_booke_irqprio_deliver(struct kvm_vcpu *vcpu,
unsigned int priority)
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 230b2bc..f2de6f4 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -83,6 +83,20 @@ bool kvmppc_critical_section(struct kvm_vcpu *vcpu)
return crit;
}
+static bool kvmppc_needs_emulation(struct kvm_vcpu *vcpu)
+{
+ /* XXX disable emulation for now, until we implemented everything */
+ if (true)
+ return false;
+
+ /* We're in a critical section, but an interrupt is pending */
+ if (kvmppc_critical_section(vcpu) &&
+ kvmppc_crit_inhibited_irq_pending(vcpu))
+ return true;
+
+ return false;
+}
+
/*
* Common checks before entering the guest world. Call with interrupts
* disabled.
@@ -141,6 +155,15 @@ int kvmppc_prepare_to_enter(struct kvm_vcpu *vcpu)
continue;
}
+ if (kvmppc_needs_emulation(vcpu)) {
+ /* Emulate one instruction, then try again */
+ local_irq_enable();
+ vcpu->arch.last_inst = KVM_INST_FETCH_FAILED;
+ kvmppc_emulate_any_instruction(vcpu);
+ hard_irq_disable();
+ continue;
+ }
+
kvm_guest_enter();
return 1;
}
--
1.8.1.4
* [PATCH 13/33] KVM: PPC: Expose helper functions for data/inst faults
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (11 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 12/33] KVM: PPC: Emulate critical sections when we hit them Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 14/33] KVM: PPC: Add std instruction emulation Alexander Graf
` (20 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
We're going to implement guest code interpretation in KVM for some rare
corner cases. This code needs to be able to inject data and instruction
faults into the guest when it encounters them.
Expose generic APIs to do this in a reasonably subarch-agnostic fashion.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/kvm_ppc.h | 8 ++++++++
arch/powerpc/kvm/book3s.c | 17 +++++++++++++++++
arch/powerpc/kvm/booke.c | 16 ++++++++++------
3 files changed, 35 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index ef97e0b..986a96e 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -127,6 +127,14 @@ extern void kvmppc_core_dequeue_dec(struct kvm_vcpu *vcpu);
extern void kvmppc_core_queue_external(struct kvm_vcpu *vcpu,
struct kvm_interrupt *irq);
extern void kvmppc_core_dequeue_external(struct kvm_vcpu *vcpu);
+extern void kvmppc_core_queue_dtlb_miss(struct kvm_vcpu *vcpu, ulong dear_flags,
+ ulong esr_flags);
+extern void kvmppc_core_queue_data_storage(struct kvm_vcpu *vcpu,
+ ulong dear_flags,
+ ulong esr_flags);
+extern void kvmppc_core_queue_itlb_miss(struct kvm_vcpu *vcpu);
+extern void kvmppc_core_queue_inst_storage(struct kvm_vcpu *vcpu,
+ ulong esr_flags);
extern void kvmppc_core_flush_tlb(struct kvm_vcpu *vcpu);
extern int kvmppc_core_check_requests(struct kvm_vcpu *vcpu);
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index ab54976..6fe3074 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -192,6 +192,23 @@ void kvmppc_core_dequeue_external(struct kvm_vcpu *vcpu)
kvmppc_book3s_dequeue_irqprio(vcpu, BOOK3S_INTERRUPT_EXTERNAL_LEVEL);
}
+void kvmppc_core_queue_data_storage(struct kvm_vcpu *vcpu, ulong dar,
+ ulong flags)
+{
+ kvmppc_set_dar(vcpu, dar);
+ kvmppc_set_dsisr(vcpu, flags);
+ kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_DATA_STORAGE);
+}
+
+void kvmppc_core_queue_inst_storage(struct kvm_vcpu *vcpu, ulong flags)
+{
+ u64 msr = kvmppc_get_msr(vcpu);
+ msr &= ~(SRR1_ISI_NOPT | SRR1_ISI_N_OR_G | SRR1_ISI_PROT);
+ msr |= flags & (SRR1_ISI_NOPT | SRR1_ISI_N_OR_G | SRR1_ISI_PROT);
+ kvmppc_set_msr_fast(vcpu, msr);
+ kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_INST_STORAGE);
+}
+
bool kvmppc_crit_inhibited_irq_pending(struct kvm_vcpu *vcpu)
{
unsigned long *p = &vcpu->arch.pending_exceptions;
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index cbe9832..c0a71ce 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -185,24 +185,28 @@ static void kvmppc_booke_queue_irqprio(struct kvm_vcpu *vcpu,
set_bit(priority, &vcpu->arch.pending_exceptions);
}
-static void kvmppc_core_queue_dtlb_miss(struct kvm_vcpu *vcpu,
- ulong dear_flags, ulong esr_flags)
+void kvmppc_core_queue_dtlb_miss(struct kvm_vcpu *vcpu,
+ ulong dear_flags, ulong esr_flags)
{
vcpu->arch.queued_dear = dear_flags;
vcpu->arch.queued_esr = esr_flags;
kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_DTLB_MISS);
}
-static void kvmppc_core_queue_data_storage(struct kvm_vcpu *vcpu,
- ulong dear_flags, ulong esr_flags)
+void kvmppc_core_queue_data_storage(struct kvm_vcpu *vcpu,
+ ulong dear_flags, ulong esr_flags)
{
vcpu->arch.queued_dear = dear_flags;
vcpu->arch.queued_esr = esr_flags;
kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_DATA_STORAGE);
}
-static void kvmppc_core_queue_inst_storage(struct kvm_vcpu *vcpu,
- ulong esr_flags)
+void kvmppc_core_queue_itlb_miss(struct kvm_vcpu *vcpu)
+{
+ kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_ITLB_MISS);
+}
+
+void kvmppc_core_queue_inst_storage(struct kvm_vcpu *vcpu, ulong esr_flags)
{
vcpu->arch.queued_esr = esr_flags;
kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_INST_STORAGE);
--
1.8.1.4
* [PATCH 14/33] KVM: PPC: Add std instruction emulation
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (12 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 13/33] KVM: PPC: Expose helper functions for data/inst faults Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 15/33] KVM: PPC: Add stw " Alexander Graf
` (19 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
This patch implements full emulation for the "std" instruction. It also
lays all the groundwork required to emulate store instructions in
general, as well as MMIO exits in emulated sections.
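The value-preparation step of the store path can be modeled in userspace. The sketch below follows the size switch in kvmppc_emulate_store(): narrow the register value to the access size, and byteswap it when guest and host endianness differ. It returns the prepared value instead of writing through pointer casts as the kernel code does; that simplification and the helper's name are this sketch's own.

```c
#include <assert.h>
#include <stdint.h>

/* Narrow 'value' to 'size' bytes and optionally byteswap it, as done
 * before handing the value to the guest store path. */
static uint64_t prepare_store_value(uint64_t value, int size, int byteswap)
{
	switch (size) {
	case 1:
		value = (uint8_t)value;
		break;
	case 2:
		value = byteswap ? __builtin_bswap16((uint16_t)value)
				 : (uint16_t)value;
		break;
	case 4:
		value = byteswap ? __builtin_bswap32((uint32_t)value)
				 : (uint32_t)value;
		break;
	case 8:
		value = byteswap ? __builtin_bswap64(value) : value;
		break;
	}
	return value;
}
```

A single-byte access never needs swapping, which is why the kernel's size-1 case is identical in both branches of its byteswap check.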
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/kvm/emulate.c | 82 ++++++++++++++++++++++++++++++++++++++++++++++
arch/powerpc/kvm/powerpc.c | 6 +++-
2 files changed, 87 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 7bf247e0..0a5355d 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -305,10 +305,86 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_GPL(kvmppc_emulate_instruction);
+static ulong get_addr(struct kvm_vcpu *vcpu, int offset, int ra)
+{
+ ulong addr = 0;
+#if defined(CONFIG_PPC_BOOK3E_64)
+ ulong msr_64bit = MSR_CM;
+#elif defined(CONFIG_PPC_BOOK3S_64)
+ ulong msr_64bit = MSR_SF;
+#else
+ ulong msr_64bit = 0;
+#endif
+
+ if (ra)
+ addr = kvmppc_get_gpr(vcpu, ra);
+
+ addr += offset;
+ if (!(kvmppc_get_msr(vcpu) & msr_64bit))
+ addr = (uint32_t)addr;
+
+ return addr;
+}
+
+static int kvmppc_emulate_store(struct kvm_vcpu *vcpu, ulong addr, u64 value,
+ int size)
+{
+ ulong paddr = addr;
+ int r;
+
+ if (kvmppc_need_byteswap(vcpu)) {
+ switch (size) {
+ case 1: *(u8*)&value = value; break;
+ case 2: *(u16*)&value = swab16(value); break;
+ case 4: *(u32*)&value = swab32(value); break;
+ case 8: *(u64*)&value = swab64(value); break;
+ }
+ } else {
+ switch (size) {
+ case 1: *(u8*)&value = value; break;
+ case 2: *(u16*)&value = value; break;
+ case 4: *(u32*)&value = value; break;
+ case 8: *(u64*)&value = value; break;
+ }
+ }
+
+ r = kvmppc_st(vcpu, &paddr, size, &value, true);
+ switch (r) {
+ case -ENOENT:
+#ifdef CONFIG_PPC_BOOK3S
+ kvmppc_core_queue_data_storage(vcpu, addr,
+ DSISR_ISSTORE | DSISR_NOHPTE);
+#else
+ kvmppc_core_queue_dtlb_miss(vcpu, addr, ESR_DST | ESR_ST);
+#endif
+ r = EMULATE_AGAIN;
+ break;
+ case -EPERM:
+#ifdef CONFIG_PPC_BOOK3S
+ kvmppc_core_queue_data_storage(vcpu, addr,
+ DSISR_ISSTORE | DSISR_PROTFAULT);
+#else
+ kvmppc_core_queue_data_storage(vcpu, addr, ESR_ST);
+#endif
+ r = EMULATE_AGAIN;
+ break;
+ case EMULATE_DO_MMIO:
+ vcpu->stat.mmio_exits++;
+ vcpu->arch.paddr_accessed = paddr;
+ vcpu->arch.vaddr_accessed = addr;
+ vcpu->run->exit_reason = KVM_EXIT_MMIO;
+ r = kvmppc_emulate_loadstore(vcpu);
+ break;
+ }
+
+ return r;
+}
+
/* Emulates privileged and non-privileged instructions */
int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
{
u32 inst = kvmppc_get_last_inst(vcpu);
+ ulong addr, value;
enum emulation_result emulated = EMULATE_DONE;
int advance = 1;
@@ -316,8 +392,14 @@ int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
/* Try non-privileged instructions */
switch (get_op(inst)) {
+ case OP_STD:
+ addr = get_addr(vcpu, (s16)get_d(inst), get_ra(inst));
+ value = kvmppc_get_gpr(vcpu, get_rs(inst));
+ emulated = kvmppc_emulate_store(vcpu, addr, value, 8);
+ break;
default:
emulated = EMULATE_FAIL;
+ break;
}
/* Try privileged instructions */
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index f2de6f4..0a326e1 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -158,8 +158,12 @@ int kvmppc_prepare_to_enter(struct kvm_vcpu *vcpu)
if (kvmppc_needs_emulation(vcpu)) {
/* Emulate one instruction, then try again */
local_irq_enable();
+
vcpu->arch.last_inst = KVM_INST_FETCH_FAILED;
- kvmppc_emulate_any_instruction(vcpu);
+ r = kvmppc_emulate_any_instruction(vcpu);
+ if (r == EMULATE_DO_MMIO)
+ return 0;
+
hard_irq_disable();
continue;
}
--
1.8.1.4
* [PATCH 15/33] KVM: PPC: Add stw instruction emulation
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (13 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 14/33] KVM: PPC: Add std instruction emulation Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 16/33] KVM: PPC: Add ld " Alexander Graf
` (18 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
This patch adds full emulation support for the stw instruction.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/kvm/emulate.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 0a5355d..6658dea 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -397,6 +397,11 @@ int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
value = kvmppc_get_gpr(vcpu, get_rs(inst));
emulated = kvmppc_emulate_store(vcpu, addr, value, 8);
break;
+ case OP_STW:
+ addr = get_addr(vcpu, (s16)get_d(inst), get_ra(inst));
+ value = kvmppc_get_gpr(vcpu, get_rs(inst));
+ emulated = kvmppc_emulate_store(vcpu, addr, value, 4);
+ break;
default:
emulated = EMULATE_FAIL;
break;
--
1.8.1.4
* [PATCH 16/33] KVM: PPC: Add ld instruction emulation
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (14 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 15/33] KVM: PPC: Add stw " Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 17/33] KVM: PPC: Add lwz " Alexander Graf
` (17 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
This patch adds full emulation support for the ld instruction. It also
introduces a generic framework to handle guest load instructions.
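Both the load and the store path share the effective-address computation introduced as get_addr() in the previous patch: (RA|0) plus the sign-extended displacement, truncated to 32 bits when the guest MSR is not in 64-bit mode. A minimal sketch of that logic, with illustrative names (`d_form_ea`, and `msr_64bit` standing for the MSR_SF/MSR_CM test):

```c
#include <stdbool.h>
#include <stdint.h>

/* D-form effective address: (RA|0) + sign-extended displacement,
 * wrapped to 32 bits when the guest runs in 32-bit mode. Sketch of
 * the series' get_addr() helper. */
static uint64_t d_form_ea(uint64_t ra_val, bool ra_is_zero, int16_t disp,
                          bool msr_64bit)
{
    uint64_t addr = ra_is_zero ? 0 : ra_val;

    addr += (int64_t)disp;      /* displacement is sign-extended */
    if (!msr_64bit)
        addr = (uint32_t)addr;  /* 32-bit mode truncates the EA */
    return addr;
}
```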
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/kvm/emulate.c | 68 +++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 67 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 6658dea..9f89a41 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -380,11 +380,66 @@ static int kvmppc_emulate_store(struct kvm_vcpu *vcpu, ulong addr, u64 value,
return r;
}
+static int kvmppc_emulate_load(struct kvm_vcpu *vcpu, ulong addr, u64 *value,
+ int size)
+{
+ ulong paddr = addr;
+ int r;
+
+ r = kvmppc_ld(vcpu, &paddr, size, value, true);
+
+ switch (r) {
+ case EMULATE_DONE:
+ switch (size) {
+ case 1: *value = *(u8*)value; break;
+ case 2: *value = *(u16*)value; break;
+ case 4: *value = *(u32*)value; break;
+ case 8: break;
+ }
+
+ if (kvmppc_need_byteswap(vcpu)) {
+ switch (size) {
+ case 1: break;
+ case 2: *value = swab16(*value); break;
+ case 4: *value = swab32(*value); break;
+ case 8: *value = swab64(*value); break;
+ }
+ }
+ break;
+ case -ENOENT:
+#ifdef CONFIG_PPC_BOOK3S
+ kvmppc_core_queue_data_storage(vcpu, addr, DSISR_NOHPTE);
+#else
+ kvmppc_core_queue_dtlb_miss(vcpu, addr, ESR_DST);
+#endif
+ r = EMULATE_AGAIN;
+ break;
+ case -EPERM:
+#ifdef CONFIG_PPC_BOOK3S
+ kvmppc_core_queue_data_storage(vcpu, addr, DSISR_PROTFAULT);
+#else
+ kvmppc_core_queue_data_storage(vcpu, addr, 0);
+#endif
+ r = EMULATE_AGAIN;
+ break;
+ case EMULATE_DO_MMIO:
+ vcpu->stat.mmio_exits++;
+ vcpu->arch.paddr_accessed = paddr;
+ vcpu->arch.vaddr_accessed = addr;
+ vcpu->run->exit_reason = KVM_EXIT_MMIO;
+ r = kvmppc_emulate_loadstore(vcpu);
+ break;
+ }
+
+ return r;
+}
+
/* Emulates privileged and non-privileged instructions */
int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
{
u32 inst = kvmppc_get_last_inst(vcpu);
- ulong addr, value;
+ ulong addr;
+ u64 value;
enum emulation_result emulated = EMULATE_DONE;
int advance = 1;
@@ -402,6 +457,17 @@ int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
value = kvmppc_get_gpr(vcpu, get_rs(inst));
emulated = kvmppc_emulate_store(vcpu, addr, value, 4);
break;
+ case OP_LD:
+ if (inst & 0x3) {
+ /* DS-form low bits select ldu/lwa, not handled here */
+ emulated = EMULATE_FAIL;
+ break;
+ }
+ addr = get_addr(vcpu, (s16)get_d(inst), get_ra(inst));
+ emulated = kvmppc_emulate_load(vcpu, addr, &value, 8);
+ if (emulated == EMULATE_DONE)
+ kvmppc_set_gpr(vcpu, get_rt(inst), value);
+ break;
default:
emulated = EMULATE_FAIL;
break;
--
1.8.1.4
* [PATCH 17/33] KVM: PPC: Add lwz instruction emulation
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (15 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 16/33] KVM: PPC: Add ld " Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 18/33] KVM: PPC: Add mfcr " Alexander Graf
` (16 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
This patch adds full emulation support for the lwz instruction.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/kvm/emulate.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 9f89a41..e688d85 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -468,6 +468,11 @@ int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
if (emulated == EMULATE_DONE)
kvmppc_set_gpr(vcpu, get_rt(inst), value);
break;
+ case OP_LWZ:
+ addr = get_addr(vcpu, (s16)get_d(inst), get_ra(inst));
+ emulated = kvmppc_emulate_load(vcpu, addr, &value, 4);
+ if (emulated == EMULATE_DONE)
+ kvmppc_set_gpr(vcpu, get_rt(inst), value);
+ break;
default:
emulated = EMULATE_FAIL;
break;
--
1.8.1.4
* [PATCH 18/33] KVM: PPC: Add mfcr instruction emulation
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (16 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 17/33] KVM: PPC: Add lwz " Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 19/33] KVM: PPC: Add addis " Alexander Graf
` (15 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
This patch adds full emulation support for the mfcr instruction.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/ppc-opcode.h | 1 +
arch/powerpc/kvm/emulate.c | 10 ++++++++++
2 files changed, 11 insertions(+)
diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index 3132bb9..ce135be 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -86,6 +86,7 @@
#define OP_TRAP_64 2
#define OP_31_XOP_TRAP 4
+#define OP_31_XOP_MFCR 19
#define OP_31_XOP_LWZX 23
#define OP_31_XOP_DCBST 54
#define OP_31_XOP_LWZUX 55
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index e688d85..33a34c3 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -473,6 +473,16 @@ int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
emulated = kvmppc_emulate_load(vcpu, addr, &value, 4);
kvmppc_set_gpr(vcpu, get_rt(inst), value);
break;
+ case 31:
+ switch (get_xop(inst)) {
+ case OP_31_XOP_MFCR:
+ kvmppc_set_gpr(vcpu, get_rt(inst), kvmppc_get_cr(vcpu));
+ break;
+ default:
+ emulated = EMULATE_FAIL;
+ break;
+ }
+ break;
default:
emulated = EMULATE_FAIL;
break;
--
1.8.1.4
* [PATCH 19/33] KVM: PPC: Add addis instruction emulation
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (17 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 18/33] KVM: PPC: Add mfcr " Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 20/33] KVM: PPC: Add ori " Alexander Graf
` (14 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
This patch implements emulation for the addis instruction which is also used
to implement lis.
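The arithmetic this patch performs is small enough to state directly: addis rD,rA,SIMM computes rD = (rA|0) + (SIMM << 16), and with rA = 0 that is exactly the lis idiom. A hedged sketch (the helper name and parameter shapes are illustrative, not the kernel's):

```c
#include <stdbool.h>
#include <stdint.h>

/* addis rD,rA,SIMM: rD = (rA|0) + (SIMM << 16). With rA = 0 this is
 * the "lis" mnemonic. Sketch of the emulation added here. */
static uint64_t emulate_addis(uint64_t ra_val, bool ra_is_zero, int16_t simm)
{
    uint64_t value = ra_is_zero ? 0 : ra_val;

    return value + (uint64_t)((int64_t)simm << 16);
}
```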
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/ppc-opcode.h | 1 +
arch/powerpc/kvm/emulate.c | 7 +++++++
2 files changed, 8 insertions(+)
diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index ce135be..2302a8e 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -112,6 +112,7 @@
#define OP_31_XOP_LHBRX 790
#define OP_31_XOP_STHBRX 918
+#define OP_ADDIS 15
#define OP_LWZ 32
#define OP_LD 58
#define OP_LWZU 33
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 33a34c3..40f21bb 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -473,6 +473,13 @@ int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
emulated = kvmppc_emulate_load(vcpu, addr, &value, 4);
kvmppc_set_gpr(vcpu, get_rt(inst), value);
break;
+ case OP_ADDIS:
+ value = 0;
+ if (get_ra(inst))
+ value = kvmppc_get_gpr(vcpu, get_ra(inst));
+ value += ((s16)get_d(inst)) << 16;
+ kvmppc_set_gpr(vcpu, get_rt(inst), value);
+ break;
case 31:
switch (get_xop(inst)) {
case OP_31_XOP_MFCR:
--
1.8.1.4
* [PATCH 20/33] KVM: PPC: Add ori instruction emulation
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (18 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 19/33] KVM: PPC: Add addis " Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 21/33] KVM: PPC: Add and " Alexander Graf
` (13 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
This patch implements emulation for the ori instruction.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/ppc-opcode.h | 1 +
arch/powerpc/kvm/emulate.c | 5 +++++
2 files changed, 6 insertions(+)
diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index 2302a8e..a92c0e3 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -113,6 +113,7 @@
#define OP_31_XOP_STHBRX 918
#define OP_ADDIS 15
+#define OP_ORI 24
#define OP_LWZ 32
#define OP_LD 58
#define OP_LWZU 33
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 40f21bb..0437d3f 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -480,6 +480,11 @@ int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
value += ((s16)get_d(inst)) << 16;
kvmppc_set_gpr(vcpu, get_rt(inst), value);
break;
+ case OP_ORI:
+ value = kvmppc_get_gpr(vcpu, get_rs(inst));
+ value |= get_d(inst);
+ kvmppc_set_gpr(vcpu, get_ra(inst), value);
+ break;
case 31:
switch (get_xop(inst)) {
case OP_31_XOP_MFCR:
--
1.8.1.4
* [PATCH 21/33] KVM: PPC: Add and instruction emulation
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (19 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 20/33] KVM: PPC: Add ori " Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 22/33] KVM: PPC: Add andi. " Alexander Graf
` (12 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
This patch adds emulation support for the and instruction, both in its
plain form and in its Rc (record) form.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/ppc-opcode.h | 1 +
arch/powerpc/kvm/emulate.c | 48 +++++++++++++++++++++++++++++++++++
2 files changed, 49 insertions(+)
diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index a92c0e3..d3ff899 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -88,6 +88,7 @@
#define OP_31_XOP_TRAP 4
#define OP_31_XOP_MFCR 19
#define OP_31_XOP_LWZX 23
+#define OP_31_XOP_AND 28
#define OP_31_XOP_DCBST 54
#define OP_31_XOP_LWZUX 55
#define OP_31_XOP_TRAP_64 68
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 0437d3f..cfe0bf6 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -434,12 +434,52 @@ static int kvmppc_emulate_load(struct kvm_vcpu *vcpu, ulong addr, u64 *value,
return r;
}
+static int kvmppc_emulate_cmp(struct kvm_vcpu *vcpu, u64 value0, u64 value1,
+ bool cmp_signed, int crf, bool is_32bit)
+{
+ bool lt, gt, eq;
+ u32 cr = 0;
+ u32 cr_mask;
+
+ if (cmp_signed) {
+ s64 signed0 = value0;
+ s64 signed1 = value1;
+
+ if (is_32bit) {
+ signed0 = (s64)(s32)signed0;
+ signed1 = (s64)(s32)signed1;
+ }
+ lt = signed0 < signed1;
+ gt = signed0 > signed1;
+ eq = signed0 == signed1;
+ } else {
+ if (is_32bit) {
+ value0 = (u32)value0;
+ value1 = (u32)value1;
+ }
+ lt = value0 < value1;
+ gt = value0 > value1;
+ eq = value0 == value1;
+ }
+
+ if (lt) cr |= 0x8;
+ if (gt) cr |= 0x4;
+ if (eq) cr |= 0x2;
+ cr <<= ((7 - crf) * 4);
+ cr_mask = 0xf << ((7 - crf) * 4);
+ cr |= kvmppc_get_cr(vcpu) & ~cr_mask;
+ kvmppc_set_cr(vcpu, cr);
+
+ return EMULATE_DONE;
+}
+
/* Emulates privileged and non-privileged instructions */
int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
{
u32 inst = kvmppc_get_last_inst(vcpu);
ulong addr;
u64 value;
+ bool is_32bit = !(kvmppc_get_msr(vcpu) & MSR_SF);
enum emulation_result emulated = EMULATE_DONE;
int advance = 1;
@@ -490,6 +530,14 @@ int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
case OP_31_XOP_MFCR:
kvmppc_set_gpr(vcpu, get_rt(inst), kvmppc_get_cr(vcpu));
break;
+ case OP_31_XOP_AND:
+ value = kvmppc_get_gpr(vcpu, get_rs(inst));
+ value &= kvmppc_get_gpr(vcpu, get_rb(inst));
+ kvmppc_set_gpr(vcpu, get_ra(inst), value);
+ if (get_rc(inst))
+ kvmppc_emulate_cmp(vcpu, value, 0, true, 0,
+ is_32bit);
+ break;
default:
emulated = EMULATE_FAIL;
break;
--
1.8.1.4
* [PATCH 22/33] KVM: PPC: Add andi. instruction emulation
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (20 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 21/33] KVM: PPC: Add and " Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 23/33] KVM: PPC: Add or " Alexander Graf
` (11 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
This patch adds emulation support for the andi. instruction.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/ppc-opcode.h | 1 +
arch/powerpc/kvm/emulate.c | 6 ++++++
2 files changed, 7 insertions(+)
diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index d3ff899..35296d0 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -115,6 +115,7 @@
#define OP_ADDIS 15
#define OP_ORI 24
+#define OP_ANDI 28
#define OP_LWZ 32
#define OP_LD 58
#define OP_LWZU 33
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index cfe0bf6..e156d31 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -525,6 +525,12 @@ int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
value |= get_d(inst);
kvmppc_set_gpr(vcpu, get_ra(inst), value);
break;
+ case OP_ANDI:
+ value = kvmppc_get_gpr(vcpu, get_rs(inst));
+ value &= get_d(inst);
+ kvmppc_set_gpr(vcpu, get_ra(inst), value);
+ kvmppc_emulate_cmp(vcpu, value, 0, true, 0, is_32bit);
+ break;
case 31:
switch (get_xop(inst)) {
case OP_31_XOP_MFCR:
--
1.8.1.4
* [PATCH 23/33] KVM: PPC: Add or instruction emulation
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (21 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 22/33] KVM: PPC: Add andi. " Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 24/33] KVM: PPC: Add cmpwi/cmpdi " Alexander Graf
` (10 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
This patch adds emulation support for the or and or. instructions.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/ppc-opcode.h | 1 +
arch/powerpc/kvm/emulate.c | 8 ++++++++
2 files changed, 9 insertions(+)
diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index 35296d0..8a155dd 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -105,6 +105,7 @@
#define OP_31_XOP_LHAUX 375
#define OP_31_XOP_STHX 407
#define OP_31_XOP_STHUX 439
+#define OP_31_XOP_OR 444
#define OP_31_XOP_MTSPR 467
#define OP_31_XOP_DCBI 470
#define OP_31_XOP_LWBRX 534
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index e156d31..bde4c1e 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -544,6 +544,14 @@ int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
kvmppc_emulate_cmp(vcpu, value, 0, true, 0,
is_32bit);
break;
+ case OP_31_XOP_OR:
+ value = kvmppc_get_gpr(vcpu, get_rs(inst));
+ value |= kvmppc_get_gpr(vcpu, get_rb(inst));
+ kvmppc_set_gpr(vcpu, get_ra(inst), value);
+ if (get_rc(inst))
+ kvmppc_emulate_cmp(vcpu, value, 0, true, 0,
+ is_32bit);
+ break;
default:
emulated = EMULATE_FAIL;
break;
--
1.8.1.4
* [PATCH 24/33] KVM: PPC: Add cmpwi/cmpdi instruction emulation
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (22 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 23/33] KVM: PPC: Add or " Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 25/33] KVM: PPC: Add bc " Alexander Graf
` (9 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
This patch adds emulation for the cmpwi and cmpdi instructions.
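The CR update that kvmppc_emulate_cmp() performs can be sketched in isolation: a signed comparison is folded into one 4-bit CR field (LT = 0x8, GT = 0x4, EQ = 0x2), which is then merged into the chosen field of the condition register, field 0 being the most significant nibble. The helper name below is illustrative:

```c
#include <stdint.h>

/* Fold a signed comparison into CR field crf (0 = CR0, the most
 * significant nibble), leaving the other fields untouched. Sketch of
 * the series' kvmppc_emulate_cmp() result encoding. */
static uint32_t cmp_to_cr(uint32_t old_cr, int64_t a, int64_t b, int crf)
{
    uint32_t bits;
    uint32_t shift = (7 - crf) * 4;

    if (a < b)
        bits = 0x8;             /* LT */
    else if (a > b)
        bits = 0x4;             /* GT */
    else
        bits = 0x2;             /* EQ */

    return (old_cr & ~(0xfu << shift)) | (bits << shift);
}
```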
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/ppc-opcode.h | 1 +
arch/powerpc/kvm/emulate.c | 5 +++++
2 files changed, 6 insertions(+)
diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index 8a155dd..5160af9 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -114,6 +114,7 @@
#define OP_31_XOP_LHBRX 790
#define OP_31_XOP_STHBRX 918
+#define OP_CMPI 11
#define OP_ADDIS 15
#define OP_ORI 24
#define OP_ANDI 28
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index bde4c1e..7b8acb0 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -531,6 +531,11 @@ int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
kvmppc_set_gpr(vcpu, get_ra(inst), value);
kvmppc_emulate_cmp(vcpu, value, 0, true, 0, is_32bit);
break;
+ case OP_CMPI:
+ value = kvmppc_get_gpr(vcpu, get_ra(inst));
+ kvmppc_emulate_cmp(vcpu, value, (s16)get_d(inst), true,
+ get_rt(inst) >> 2, !(get_rt(inst) & 1));
+ break;
case 31:
switch (get_xop(inst)) {
case OP_31_XOP_MFCR:
--
1.8.1.4
* [PATCH 25/33] KVM: PPC: Add bc instruction emulation
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (23 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 24/33] KVM: PPC: Add cmpwi/cmpdi " Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 26/33] KVM: PPC: Add mtcrf " Alexander Graf
` (8 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
This patch adds emulation support for conditional branches.
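The branch-taken decision encoded in the BO/BI fields, which this patch implements, can be sketched as a standalone predicate. This is an illustrative helper, not the kernel function: BO bit 0x4 skips the CTR test (which otherwise decrements CTR), BO bit 0x10 skips the CR test, and bi numbers CR bits from the most significant bit down:

```c
#include <stdbool.h>
#include <stdint.h>

/* Decide whether a bc branch is taken, per the BO/BI semantics the
 * patch emulates. *ctr is decremented unless the CTR test is skipped.
 * Illustrative sketch only. */
static bool bc_taken(int bo, int bi, uint64_t *ctr, uint32_t cr)
{
    if (!(bo & 0x4)) {
        (*ctr)--;
        bool ctr_ok = (bo & 0x2) ? (*ctr == 0) : (*ctr != 0);
        if (!ctr_ok)
            return false;
    }
    if (!(bo & 0x10)) {
        bool bit_set = (cr >> (31 - bi)) & 1;
        bool cond_ok = (bo & 0x8) ? bit_set : !bit_set;
        if (!cond_ok)
            return false;
    }
    return true;
}
```

For example, bdnz is BO = 16 (decrement, branch while CTR != 0, CR test skipped) and beq tests the EQ bit of CR0 with BO = 12, BI = 2.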
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/ppc-opcode.h | 1 +
arch/powerpc/kvm/emulate.c | 51 +++++++++++++++++++++++++++++++++++
2 files changed, 52 insertions(+)
diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index 5160af9..e130156 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -116,6 +116,7 @@
#define OP_CMPI 11
#define OP_ADDIS 15
+#define OP_BC 16
#define OP_ORI 24
#define OP_ANDI 28
#define OP_LWZ 32
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 7b8acb0..aeadc30 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -473,6 +473,54 @@ static int kvmppc_emulate_cmp(struct kvm_vcpu *vcpu, u64 value0, u64 value1,
return EMULATE_DONE;
}
+int kvmppc_emulate_bc(struct kvm_vcpu *vcpu, u32 inst, bool is_32bit)
+{
+ u64 addr = (s64)(s16)(get_d(inst) & ~0x3);
+ int bo = get_rt(inst);
+ int bi = get_ra(inst);
+
+ /* If not absolute, PC gets added */
+ if (!(inst & 0x2))
+ addr += kvmppc_get_pc(vcpu);
+ if (is_32bit)
+ addr = (u32)addr;
+
+ /* LR gets set with LK=1 */
+ if (inst & 0x1)
+ kvmppc_set_lr(vcpu, kvmppc_get_pc(vcpu) + 4);
+
+ /* CTR handling */
+ if (!(bo & 0x4)) {
+ ulong ctr = kvmppc_get_ctr(vcpu);
+ ctr--;
+ if (is_32bit)
+ ctr = (u32)ctr;
+ kvmppc_set_ctr(vcpu, ctr);
+ if (((bo & 0x2) && (ctr != 0)) ||
+ (!(bo & 0x2) && (ctr == 0))) {
+ /* Condition not fulfilled, go to next inst */
+ return EMULATE_DONE;
+ }
+ }
+
+ /* CR handling */
+ if (!(bo & 0x10)) {
+ uint32_t mask = 1 << (3 - (bi & 0x3));
+ u32 cr_part = kvmppc_get_cr(vcpu) >> (28 - (bi & ~0x3));
+ if (((bo & 0x8) && !(cr_part & mask)) ||
+ (!(bo & 0x8) && (cr_part & mask))) {
+ /* Condition not fulfilled, go to next inst */
+ return EMULATE_DONE;
+ }
+ }
+
+ /* Off we branch ... */
+ kvmppc_set_pc(vcpu, addr);
+
+ /* Indicate that we don't want to advance the PC */
+ return EMULATE_AGAIN;
+}
+
/* Emulates privileged and non-privileged instructions */
int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
{
@@ -536,6 +584,9 @@ int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
kvmppc_emulate_cmp(vcpu, value, (s16)get_d(inst), true,
get_rt(inst) >> 2, !(get_rt(inst) & 1));
break;
+ case OP_BC:
+ emulated = kvmppc_emulate_bc(vcpu, inst, is_32bit);
+ break;
case 31:
switch (get_xop(inst)) {
case OP_31_XOP_MFCR:
--
1.8.1.4
* [PATCH 26/33] KVM: PPC: Add mtcrf instruction emulation
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (24 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 25/33] KVM: PPC: Add bc " Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 27/33] KVM: PPC: Add xor " Alexander Graf
` (7 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
This patch adds emulation support for the mtcrf instruction.
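The FXM expansion in the patch's kvmppc_emulate_mtcrf() — each of the 8 FXM bits selecting one 4-bit CR field, MSB first — can be written as a loop instead of the unrolled if-chain. A sketch with an illustrative name:

```c
#include <stdint.h>

/* Expand the 8-bit FXM field of mtcrf into a 32-bit CR mask: FXM bit
 * 0x80 selects CR field 0 (the most significant nibble), and so on.
 * Equivalent to the if-chain in the patch. */
static uint32_t fxm_to_mask(int fxm)
{
    uint32_t mask = 0;
    int i;

    for (i = 0; i < 8; i++)
        if (fxm & (0x80 >> i))
            mask |= 0xf0000000u >> (4 * i);
    return mask;
}
```

The new CR is then (old_cr & ~mask) | (rS & mask), exactly as the patch computes it.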
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/ppc-opcode.h | 1 +
arch/powerpc/kvm/emulate.c | 25 +++++++++++++++++++++++++
2 files changed, 26 insertions(+)
diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index e130156..86c510e 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -94,6 +94,7 @@
#define OP_31_XOP_TRAP_64 68
#define OP_31_XOP_DCBF 86
#define OP_31_XOP_LBZX 87
+#define OP_31_XOP_MTCRF 144
#define OP_31_XOP_STWX 151
#define OP_31_XOP_STBX 215
#define OP_31_XOP_LBZUX 119
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index aeadc30..d6e0e4b 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -521,6 +521,28 @@ int kvmppc_emulate_bc(struct kvm_vcpu *vcpu, u32 inst, bool is_32bit)
return EMULATE_AGAIN;
}
+int kvmppc_emulate_mtcrf(struct kvm_vcpu *vcpu, u32 inst)
+{
+ u32 value = kvmppc_get_cr(vcpu);
+ u32 new_cr = kvmppc_get_gpr(vcpu, get_rs(inst));
+ u32 mask = 0;
+ int fxm = (inst >> 12) & 0xff;
+
+ if (fxm & 0x80) mask |= 0xf0000000;
+ if (fxm & 0x40) mask |= 0x0f000000;
+ if (fxm & 0x20) mask |= 0x00f00000;
+ if (fxm & 0x10) mask |= 0x000f0000;
+ if (fxm & 0x08) mask |= 0x0000f000;
+ if (fxm & 0x04) mask |= 0x00000f00;
+ if (fxm & 0x02) mask |= 0x000000f0;
+ if (fxm & 0x01) mask |= 0x0000000f;
+
+ value = value & ~mask;
+ value |= new_cr & mask;
+ kvmppc_set_cr(vcpu, value);
+ return EMULATE_DONE;
+}
+
/* Emulates privileged and non-privileged instructions */
int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
{
@@ -592,6 +614,9 @@ int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
case OP_31_XOP_MFCR:
kvmppc_set_gpr(vcpu, get_rt(inst), kvmppc_get_cr(vcpu));
break;
+ case OP_31_XOP_MTCRF:
+ emulated = kvmppc_emulate_mtcrf(vcpu, inst);
+ break;
case OP_31_XOP_AND:
value = kvmppc_get_gpr(vcpu, get_rs(inst));
value &= kvmppc_get_gpr(vcpu, get_rb(inst));
--
1.8.1.4
* [PATCH 27/33] KVM: PPC: Add xor instruction emulation
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (25 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 26/33] KVM: PPC: Add mtcrf " Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 28/33] KVM: PPC: Add oris " Alexander Graf
` (6 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
This patch adds emulation for the xor instruction.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/ppc-opcode.h | 1 +
arch/powerpc/kvm/emulate.c | 8 ++++++++
2 files changed, 9 insertions(+)
diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index 86c510e..f2da847 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -101,6 +101,7 @@
#define OP_31_XOP_STBUX 247
#define OP_31_XOP_LHZX 279
#define OP_31_XOP_LHZUX 311
+#define OP_31_XOP_XOR 316
#define OP_31_XOP_MFSPR 339
#define OP_31_XOP_LHAX 343
#define OP_31_XOP_LHAUX 375
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index d6e0e4b..8c2db76 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -633,6 +633,14 @@ int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
kvmppc_emulate_cmp(vcpu, value, 0, true, 0,
is_32bit);
break;
+ case OP_31_XOP_XOR:
+ value = kvmppc_get_gpr(vcpu, get_rs(inst));
+ value ^= kvmppc_get_gpr(vcpu, get_rb(inst));
+ kvmppc_set_gpr(vcpu, get_ra(inst), value);
+ if (get_rc(inst))
+ kvmppc_emulate_cmp(vcpu, value, 0, true, 0,
+ is_32bit);
+ break;
default:
emulated = EMULATE_FAIL;
break;
--
1.8.1.4
* [PATCH 28/33] KVM: PPC: Add oris instruction emulation
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (26 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 27/33] KVM: PPC: Add xor " Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 29/33] KVM: PPC: Add rldicr/rldicl/rldic " Alexander Graf
` (5 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
This patch adds emulation support for the oris instruction.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/ppc-opcode.h | 1 +
arch/powerpc/kvm/emulate.c | 5 +++++
2 files changed, 6 insertions(+)
diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index f2da847..42aba82 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -120,6 +120,7 @@
#define OP_ADDIS 15
#define OP_BC 16
#define OP_ORI 24
+#define OP_ORIS 25
#define OP_ANDI 28
#define OP_LWZ 32
#define OP_LD 58
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 8c2db76..fe0eb6e 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -595,6 +595,11 @@ int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
value |= get_d(inst);
kvmppc_set_gpr(vcpu, get_ra(inst), value);
break;
+ case OP_ORIS:
+ value = kvmppc_get_gpr(vcpu, get_rs(inst));
+ value |= get_d(inst) << 16;
+ kvmppc_set_gpr(vcpu, get_ra(inst), value);
+ break;
case OP_ANDI:
value = kvmppc_get_gpr(vcpu, get_rs(inst));
value &= get_d(inst);
--
1.8.1.4
* [PATCH 29/33] KVM: PPC: Add rldicr/rldicl/rldic instruction emulation
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (27 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 28/33] KVM: PPC: Add oris " Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 30/33] KVM: PPC: Add rlwimi " Alexander Graf
` (4 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
This patch adds emulation for the rldicr, rldicl and rldic instructions.
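The MB..ME wrap-around mask that the rld* family applies after rotation can be sketched separately, using the Power bit numbering (bit 0 = most significant): when mb <= me the mask covers bits mb through me, and when mb > me it wraps around the word. An illustrative helper:

```c
#include <stdint.h>

/* 64-bit MB..ME mask, big-endian bit numbering (0 = MSB): bits mb..me
 * inclusive, wrapping around when mb > me. Sketch of the mask logic
 * in kvmppc_emulate_rld(). Assumes 0 <= mb, me <= 63. */
static uint64_t rld_mask(int mb, int me)
{
    uint64_t head = ~0ULL >> mb;            /* bits mb..63 set */
    uint64_t tail = ~0ULL << (63 - me);     /* bits 0..me set  */

    return (mb <= me) ? (head & tail) : (head | tail);
}
```

rldicl uses mb..63, rldicr uses 0..me, and rldic uses mb..63-sh, matching the switch in the patch.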
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/ppc-opcode.h | 1 +
arch/powerpc/kvm/emulate.c | 61 +++++++++++++++++++++++++++++++++++
2 files changed, 62 insertions(+)
diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index 42aba82..1e80fd2 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -122,6 +122,7 @@
#define OP_ORI 24
#define OP_ORIS 25
#define OP_ANDI 28
+#define OP_RLD 30
#define OP_LWZ 32
#define OP_LD 58
#define OP_LWZU 33
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index fe0eb6e..47b1de8 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -543,6 +543,64 @@ int kvmppc_emulate_mtcrf(struct kvm_vcpu *vcpu, u32 inst)
return EMULATE_DONE;
}
+int kvmppc_emulate_rld(struct kvm_vcpu *vcpu, u32 inst)
+{
+ int sh = (inst >> 11) & 0x1f;
+ int mb = (inst >> 6) & 0x1f;
+ int me;
+ int type = (inst >> 2) & 0x7;
+ u64 source = kvmppc_get_gpr(vcpu, get_rs(inst));
+ u64 dest;
+
+ if (inst & 0x2)
+ sh |= 0x20;
+
+ if (inst & 0x20)
+ mb |= 0x20;
+
+ switch (type) {
+ case 0x0: /* rldicl */
+ me = 63;
+ break;
+ case 0x1: /* rldicr */
+ me = mb;
+ mb = 0;
+ break;
+ case 0x2: /* rldic */
+ me = 63 - sh;
+ break;
+ case 0x3: /* rldimi */
+ case 0x4: /* rldcl, rldcr */
+ default:
+ return EMULATE_FAIL;
+ }
+
+ if (sh && !mb && (me == (63 - sh)))
+ dest = source << sh;
+ else if (sh && (me == 63) && (sh == (64 - mb)))
+ dest = source >> mb;
+ else {
+ u64 mask;
+ dest = sh ? (source << sh) | (source >> (64 - sh)) : source;
+ if (!mb)
+ mask = -1ULL << (63 - me);
+ else if (me == 63)
+ mask = -1ULL >> mb;
+ else {
+ mask = (-1ULL >> mb) ^ ((-1ULL >> me) >> 1);
+ if (mb > me)
+ mask = ~mask;
+ }
+ dest &= mask;
+ }
+
+ kvmppc_set_gpr(vcpu, get_ra(inst), dest);
+ if (get_rc(inst))
+ kvmppc_emulate_cmp(vcpu, dest, 0, true, 0, false);
+
+ return EMULATE_DONE;
+}
+
/* Emulates privileged and non-privileged instructions */
int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
{
@@ -614,6 +672,9 @@ int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
case OP_BC:
emulated = kvmppc_emulate_bc(vcpu, inst, is_32bit);
break;
+ case OP_RLD:
+ emulated = kvmppc_emulate_rld(vcpu, inst);
+ break;
case 31:
switch (get_xop(inst)) {
case OP_31_XOP_MFCR:
--
1.8.1.4
^ permalink raw reply related [flat|nested] 40+ messages in thread
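The mask construction in kvmppc_emulate_rld() can be checked against a stand-alone model. The sketch below (hypothetical names, not the KVM API) implements the rldicl case only: rotate left by sh, then keep big-endian bits mb through 63. Note that a 64-bit rotate needs sh == 0 special-cased, since `v >> 64` is undefined behavior in C:

```c
#include <assert.h>
#include <stdint.h>

/* Rotate left by sh, guarding sh == 0 (v >> 64 would be undefined). */
static uint64_t rotl64(uint64_t v, int sh)
{
	return sh ? (v << sh) | (v >> (64 - sh)) : v;
}

/* Hypothetical model of the rldicl path: rotate, then clear the
 * high mb bits (big-endian bits 0..mb-1). */
static uint64_t emulate_rldicl(uint64_t rs, int sh, int mb)
{
	uint64_t mask = mb ? ~0ULL >> mb : ~0ULL;

	return rotl64(rs, sh) & mask;
}
```

With sh = 0 this degenerates to a plain clear-left-bits operation, the classic `clrldi` idiom; with mb = 0 it is a pure rotate.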
* [PATCH 30/33] KVM: PPC: Add rlwimi instruction emulation
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (28 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 29/33] KVM: PPC: Add rldicr/rldicl/rldic " Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 31/33] KVM: PPC: Add rlwinm " Alexander Graf
` (3 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
This patch adds emulation support for the rlwimi instruction.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/ppc-opcode.h | 1 +
arch/powerpc/kvm/emulate.c | 34 ++++++++++++++++++++++++++++++++++
2 files changed, 35 insertions(+)
diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index 1e80fd2..569b518 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -119,6 +119,7 @@
#define OP_CMPI 11
#define OP_ADDIS 15
#define OP_BC 16
+#define OP_RLWIMI 20
#define OP_ORI 24
#define OP_ORIS 25
#define OP_ANDI 28
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 47b1de8..c40f255 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -601,6 +601,37 @@ int kvmppc_emulate_rld(struct kvm_vcpu *vcpu, u32 inst)
return EMULATE_DONE;
}
+int kvmppc_emulate_rlwimi(struct kvm_vcpu *vcpu, u32 inst)
+{
+ int sh = (inst >> 11) & 0x1f;
+ int mb = (inst >> 6) & 0x1f;
+ int me = (inst >> 1) & 0x1f;
+ u32 source = kvmppc_get_gpr(vcpu, get_rs(inst));
+ u32 dest = source;
+ u32 mask;
+
+ if (sh)
+ dest = (source << sh) | (source >> (32 - sh));
+
+ if (!mb)
+ mask = (u32)-1 << (31 - me);
+ else if (me == 31)
+ mask = (u32)-1 >> mb;
+ else {
+ mask = ((u32)-1 >> mb) ^ (((u32)-1 >> me) >> 1);
+ if (mb > me)
+ mask = ~mask;
+ }
+ dest &= mask;
+ dest |= kvmppc_get_gpr(vcpu, get_ra(inst)) & ~mask;
+
+ kvmppc_set_gpr(vcpu, get_ra(inst), dest);
+ if (get_rc(inst))
+ kvmppc_emulate_cmp(vcpu, dest, 0, true, 0, true);
+
+ return EMULATE_DONE;
+}
+
/* Emulates privileged and non-privileged instructions */
int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
{
@@ -675,6 +706,9 @@ int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
case OP_RLD:
emulated = kvmppc_emulate_rld(vcpu, inst);
break;
+ case OP_RLWIMI:
+ emulated = kvmppc_emulate_rlwimi(vcpu, inst);
+ break;
case 31:
switch (get_xop(inst)) {
case OP_31_XOP_MFCR:
--
1.8.1.4
^ permalink raw reply related [flat|nested] 40+ messages in thread
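The 32-bit mask generation in kvmppc_emulate_rlwimi() follows the same MB/ME convention as the 64-bit code. A self-contained sketch (illustrative names, not the KVM API) of rotate-then-insert-under-mask:

```c
#include <assert.h>
#include <stdint.h>

/* Rotate left by sh, guarding sh == 0 (v >> 32 would be undefined). */
static uint32_t rotl32(uint32_t v, int sh)
{
	return sh ? (v << sh) | (v >> (32 - sh)) : v;
}

/* Mask with big-endian bits mb..me set, wrapping when mb > me,
 * mirroring the patch's three-way mask construction. */
static uint32_t mask32(int mb, int me)
{
	uint32_t mask;

	if (!mb)
		mask = 0xffffffffu << (31 - me);
	else if (me == 31)
		mask = 0xffffffffu >> mb;
	else {
		mask = (0xffffffffu >> mb) ^ ((0xffffffffu >> me) >> 1);
		if (mb > me)
			mask = ~mask;
	}
	return mask;
}

/* Hypothetical model of rlwimi: rotate rs, insert under mask into ra. */
static uint32_t emulate_rlwimi(uint32_t ra, uint32_t rs, int sh, int mb, int me)
{
	uint32_t mask = mask32(mb, me);

	return (rotl32(rs, sh) & mask) | (ra & ~mask);
}
```

The defining property of rlwimi, visible here, is that bits of ra outside the mask survive unchanged, which is what distinguishes it from rlwinm.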
* [PATCH 31/33] KVM: PPC: Add rlwinm instruction emulation
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (29 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 30/33] KVM: PPC: Add rlwimi " Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 32/33] KVM: PPC: Handle NV registers in emulated critical sections Alexander Graf
` (2 subsequent siblings)
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
This patch adds emulation support for the rlwinm instruction.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/include/asm/ppc-opcode.h | 1 +
arch/powerpc/kvm/emulate.c | 10 +++++++---
2 files changed, 8 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index 569b518..fac38a8 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -120,6 +120,7 @@
#define OP_ADDIS 15
#define OP_BC 16
#define OP_RLWIMI 20
+#define OP_RLWINM 21
#define OP_ORI 24
#define OP_ORIS 25
#define OP_ANDI 28
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index c40f255..1da6691 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -601,7 +601,7 @@ int kvmppc_emulate_rld(struct kvm_vcpu *vcpu, u32 inst)
return EMULATE_DONE;
}
-int kvmppc_emulate_rlwimi(struct kvm_vcpu *vcpu, u32 inst)
+int kvmppc_emulate_rlwi(struct kvm_vcpu *vcpu, u32 inst)
{
int sh = (inst >> 11) & 0x1f;
int mb = (inst >> 6) & 0x1f;
@@ -623,7 +623,8 @@ int kvmppc_emulate_rlwimi(struct kvm_vcpu *vcpu, u32 inst)
mask = ~mask;
}
dest &= mask;
- dest |= kvmppc_get_gpr(vcpu, get_ra(inst)) & ~mask;
+ if (get_op(inst) == OP_RLWIMI)
+ dest |= kvmppc_get_gpr(vcpu, get_ra(inst)) & ~mask;
kvmppc_set_gpr(vcpu, get_ra(inst), dest);
if (get_rc(inst))
@@ -707,7 +708,10 @@ int kvmppc_emulate_any_instruction(struct kvm_vcpu *vcpu)
emulated = kvmppc_emulate_rld(vcpu, inst);
break;
case OP_RLWIMI:
- emulated = kvmppc_emulate_rlwimi(vcpu, inst);
+ emulated = kvmppc_emulate_rlwi(vcpu, inst);
+ break;
+ case OP_RLWINM:
+ emulated = kvmppc_emulate_rlwi(vcpu, inst);
break;
case 31:
switch (get_xop(inst)) {
--
1.8.1.4
^ permalink raw reply related [flat|nested] 40+ messages in thread
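Dropping the merge with ra gives rlwinm, which is why this patch can fold both opcodes into one function gated on get_op(inst). A stand-alone sketch of the rlwinm-only path (hypothetical names, simple mb <= me masks only):

```c
#include <assert.h>
#include <stdint.h>

/* Rotate left by sh, guarding sh == 0 (v >> 32 would be undefined). */
static uint32_t rotl32(uint32_t v, int sh)
{
	return sh ? (v << sh) | (v >> (32 - sh)) : v;
}

/* Hypothetical model of rlwinm: rotate left, then AND with the
 * mb..me mask (non-wrapping mb <= me case only). */
static uint32_t emulate_rlwinm(uint32_t rs, int sh, int mb, int me)
{
	uint32_t mask = (mb ? 0xffffffffu >> mb : 0xffffffffu) &
			(0xffffffffu << (31 - me));

	return rotl32(rs, sh) & mask;
}
```

For example, rotating by 8 and masking bits 24..31 extracts the high byte of the source, a common rlwinm idiom in PPC code.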
* [PATCH 32/33] KVM: PPC: Handle NV registers in emulated critical sections
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (30 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 31/33] KVM: PPC: Add rlwinm " Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-22 21:23 ` [PATCH 33/33] KVM: PPC: Enable critical section emulation Alexander Graf
2014-06-24 18:53 ` [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Scott Wood
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
When we emulate instructions during our critical section emulation we may
overwrite non-volatile registers that the looping code would need to load
back in.
Notify the callers of prepare_to_enter() when we emulated code, so that they
can enable NV restoration on their exit path.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/kvm/book3s_pr.c | 16 +++++++++++++---
arch/powerpc/kvm/booke.c | 3 +++
arch/powerpc/kvm/powerpc.c | 6 +++++-
3 files changed, 21 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 3b82e86..8cce531 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1174,10 +1174,20 @@ program_interrupt:
* again due to a host external interrupt.
*/
s = kvmppc_prepare_to_enter(vcpu);
- if (s <= 0)
+ switch (s) {
+ case -EINTR:
r = s;
- else {
- /* interrupts now hard-disabled */
+ break;
+ case 0:
+ /* Exit_reason is set, go to host */
+ r = RESUME_HOST;
+ break;
+ case 2:
+ /* Registers modified, reload then enter */
+ r = RESUME_GUEST_NV;
+ /* fall through */
+ case 1:
+ /* Interrupts now hard-disabled, enter guest */
kvmppc_fix_ee_before_entry();
}
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index c0a71ce..66718d4 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -1216,6 +1216,9 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
if (s <= 0)
r = (s << 2) | RESUME_HOST | (r & RESUME_FLAG_NV);
else {
+ if (s == 2)
+ r = RESUME_GUEST_NV;
+
/* interrupts now hard-disabled */
kvmppc_fix_ee_before_entry();
}
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 0a326e1..6757c47 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -103,12 +103,14 @@ static bool kvmppc_needs_emulation(struct kvm_vcpu *vcpu)
*
* returns:
*
+ * == 2 if we're ready to go into guest state with NV registers restored
* == 1 if we're ready to go into guest state
* <= 0 if we need to go back to the host with return value
*/
int kvmppc_prepare_to_enter(struct kvm_vcpu *vcpu)
{
int r;
+ int enter_level = 1;
WARN_ON(irqs_disabled());
hard_irq_disable();
@@ -163,13 +165,15 @@ int kvmppc_prepare_to_enter(struct kvm_vcpu *vcpu)
r = kvmppc_emulate_any_instruction(vcpu);
if (r == EMULATE_DO_MMIO)
return 0;
+ if (r == EMULATE_DONE)
+ enter_level = 2;
hard_irq_disable();
continue;
}
kvm_guest_enter();
- return 1;
+ return enter_level;
}
/* return to host */
--
1.8.1.4
^ permalink raw reply related [flat|nested] 40+ messages in thread
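The new three-valued return contract of kvmppc_prepare_to_enter() is easiest to see from the caller-side dispatch this patch adds to book3s_pr.c. A sketch under assumed names (the enum values are placeholders, not the kernel's RESUME_* constants):

```c
#include <assert.h>

/* Placeholder resume actions modeling the new book3s_pr.c switch. */
enum resume_action { GO_ERROR, GO_HOST, GO_GUEST, GO_GUEST_NV };

static enum resume_action dispatch(int s)
{
	if (s < 0)
		return GO_ERROR;	/* e.g. -EINTR: pending signal */
	if (s == 0)
		return GO_HOST;		/* exit_reason set, return to host */
	if (s == 2)
		return GO_GUEST_NV;	/* emulation touched NV regs, reload */
	return GO_GUEST;		/* s == 1: enter, IRQs hard-disabled */
}
```

The s == 2 case falls through to the same guest-entry path as s == 1 in the actual patch; the only difference is that the exit path is told to restore non-volatile registers first.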
* [PATCH 33/33] KVM: PPC: Enable critical section emulation
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (31 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 32/33] KVM: PPC: Handle NV registers in emulated critical sections Alexander Graf
@ 2014-06-22 21:23 ` Alexander Graf
2014-06-24 18:53 ` [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Scott Wood
33 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-06-22 21:23 UTC (permalink / raw)
To: kvm-ppc; +Cc: kvm
Now that we have all the bits in place to properly emulate all code that
we should ever encounter in (Linux based guests') critical sections, let's
arm the code to put us into emulation mode when we see it.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
arch/powerpc/kvm/powerpc.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 6757c47..380ed70 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -85,10 +85,6 @@ bool kvmppc_critical_section(struct kvm_vcpu *vcpu)
static bool kvmppc_needs_emulation(struct kvm_vcpu *vcpu)
{
- /* XXX disable emulation for now, until we implemented everything */
- if (true)
- return false;
-
/* We're in a critical section, but an interrupt is pending */
if (kvmppc_critical_section(vcpu) &&
kvmppc_crit_inhibited_irq_pending(vcpu))
--
1.8.1.4
^ permalink raw reply related [flat|nested] 40+ messages in thread
* Re: [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code
2014-06-22 21:23 [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Alexander Graf
` (32 preceding siblings ...)
2014-06-22 21:23 ` [PATCH 33/33] KVM: PPC: Enable critical section emulation Alexander Graf
@ 2014-06-24 18:53 ` Scott Wood
2014-06-24 22:41 ` Alexander Graf
33 siblings, 1 reply; 40+ messages in thread
From: Scott Wood @ 2014-06-24 18:53 UTC (permalink / raw)
To: Alexander Graf; +Cc: kvm-ppc, kvm
On Sun, 2014-06-22 at 23:23 +0200, Alexander Graf wrote:
> Howdy,
>
> Ben reminded me a while back that we have a nasty race in our KVM PV code.
>
> We replace a few instructions with longer streams of instructions to check
> whether it's necessary to trap out from it (like mtmsr, no need to trap if
> we only disable interrupts). During those replacement chunks we must not get
> any interrupts, because they might overwrite scratch space that we already
> used to save otherwise clobbered register state into.
>
> So we have a thing called "critical sections" which allows us to atomically
> get in and out of "interrupt disabled" modes without touching MSR. When we
> are supposed to deliver an interrupt into the guest while we are in a critical
> section, we just don't inject the interrupt yet, but leave it be until the
> next trap.
>
> However, we never really know when the next trap would be. For all we know it
> could be never. At this point we created a race that is a potential source
> for interrupt loss or at least deferral.
>
> This patch set aims at solving the race. Instead of merely deferring an
> interrupt when we see such a situation, we go into a special instruction
> interpretation mode. In this mode, we interpret all PPC assembler instructions
> that happen until we are out of the critical section again, at which point
> we can now inject the interrupt.
>
> This bug only affects KVM implementations that make use of the magic page, so
> e500v2, book3s_32 and book3s_64 PR KVM.
Would it be possible to single step through the critical section
instead? Or set a high res timer to expire very quickly?
-Scott
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code
2014-06-24 18:53 ` [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code Scott Wood
@ 2014-06-24 22:41 ` Alexander Graf
2014-06-24 23:15 ` Scott Wood
0 siblings, 1 reply; 40+ messages in thread
From: Alexander Graf @ 2014-06-24 22:41 UTC (permalink / raw)
To: Scott Wood; +Cc: kvm-ppc, kvm
On 24.06.14 20:53, Scott Wood wrote:
> On Sun, 2014-06-22 at 23:23 +0200, Alexander Graf wrote:
>> Howdy,
>>
>> Ben reminded me a while back that we have a nasty race in our KVM PV code.
>>
>> We replace a few instructions with longer streams of instructions to check
>> whether it's necessary to trap out from it (like mtmsr, no need to trap if
>> we only disable interrupts). During those replacement chunks we must not get
>> any interrupts, because they might overwrite scratch space that we already
>> used to save otherwise clobbered register state into.
>>
>> So we have a thing called "critical sections" which allows us to atomically
>> get in and out of "interrupt disabled" modes without touching MSR. When we
>> are supposed to deliver an interrupt into the guest while we are in a critical
>> section, we just don't inject the interrupt yet, but leave it be until the
>> next trap.
>>
>> However, we never really know when the next trap would be. For all we know it
>> could be never. At this point we created a race that is a potential source
>> for interrupt loss or at least deferral.
>>
>> This patch set aims at solving the race. Instead of merely deferring an
>> interrupt when we see such a situation, we go into a special instruction
>> interpretation mode. In this mode, we interpret all PPC assembler instructions
>> that happen until we are out of the critical section again, at which point
>> we can now inject the interrupt.
>>
>> This bug only affects KVM implementations that make use of the magic page, so
>> e500v2, book3s_32 and book3s_64 PR KVM.
> Would it be possible to single step through the critical section
> instead? Or set a high res timer to expire very quickly?
There are a few other alternatives to this implementation:
1) Unmap the magic page, emulate all memory access to it while in
critical and irq pending
2) Trigger a timer that sends a request to the vcpu to wake it from
potential sleep and inject the irq
3) Single step until we're beyond the critical section
4) Probably more that I can't think of right now :)
Each has its good and bad sides. Unmapping the magic page adds
complexity to the MMU mapping code, since we need to make sure we don't
map it back in on demand and treat faults to it specially.
The timer interrupt works, but I'm not fully convinced that it's a good
idea for things like MC events which we also block during critical
sections on e500v2.
Single stepping is hard enough to get right on interaction between QEMU,
KVM and the guest. I didn't really want to make that stuff any more
complicated.
This approach is really just one out of many - and it's one that's
nicely self-contained and shouldn't have any impact at all on
implementations that don't care about it ;).
Alex
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code
2014-06-24 22:41 ` Alexander Graf
@ 2014-06-24 23:15 ` Scott Wood
2014-06-24 23:40 ` Alexander Graf
0 siblings, 1 reply; 40+ messages in thread
From: Scott Wood @ 2014-06-24 23:15 UTC (permalink / raw)
To: Alexander Graf; +Cc: kvm-ppc, kvm
On Wed, 2014-06-25 at 00:41 +0200, Alexander Graf wrote:
> On 24.06.14 20:53, Scott Wood wrote:
> > On Sun, 2014-06-22 at 23:23 +0200, Alexander Graf wrote:
> >> Howdy,
> >>
> >> Ben reminded me a while back that we have a nasty race in our KVM PV code.
> >>
> >> We replace a few instructions with longer streams of instructions to check
> >> whether it's necessary to trap out from it (like mtmsr, no need to trap if
> >> we only disable interrupts). During those replacement chunks we must not get
> >> any interrupts, because they might overwrite scratch space that we already
> >> used to save otherwise clobbered register state into.
> >>
> >> So we have a thing called "critical sections" which allows us to atomically
> >> get in and out of "interrupt disabled" modes without touching MSR. When we
> >> are supposed to deliver an interrupt into the guest while we are in a critical
> >> section, we just don't inject the interrupt yet, but leave it be until the
> >> next trap.
> >>
> >> However, we never really know when the next trap would be. For all we know it
> >> could be never. At this point we created a race that is a potential source
> >> for interrupt loss or at least deferral.
> >>
> >> This patch set aims at solving the race. Instead of merely deferring an
> >> interrupt when we see such a situation, we go into a special instruction
> >> interpretation mode. In this mode, we interpret all PPC assembler instructions
> >> that happen until we are out of the critical section again, at which point
> >> we can now inject the interrupt.
> >>
> >> This bug only affects KVM implementations that make use of the magic page, so
> >> e500v2, book3s_32 and book3s_64 PR KVM.
> > Would it be possible to single step through the critical section
> > instead? Or set a high res timer to expire very quickly?
>
> There are a few other alternatives to this implementation:
>
> 1) Unmap the magic page, emulate all memory access to it while in
> critical and irq pending
> 2) Trigger a timer that sends a request to the vcpu to wake it from
> potential sleep and inject the irq
> 3) Single step until we're beyond the critical section
> 4) Probably more that I can't think of right now :)
>
> Each has their good and bad sides. Unmapping the magic page adds
> complexity to the MMU mapping code, since we need to make sure we don't
> map it back in on demand and treat faults to it specially.
>
> The timer interrupt works, but I'm not fully convinced that it's a good
> idea for things like MC events which we also block during critical
> sections on e500v2.
Are you concerned about the guest seeing machine checks that are (more)
asynchronous with the error condition? e500v2 machine checks are always
asynchronous. From the core manual:
Machine check interrupts are typically caused by a hardware or
memory subsystem failure or by an attempt to access an invalid
address. They may be caused indirectly by execution of an
instruction, but may not be recognized or reported until long
after the processor has executed past the instruction that
caused the machine check. As such, machine check interrupts are
not thought of as synchronous or asynchronous nor as precise or
imprecise.
I don't think the lag would be a problem, and certainly it's better than
the current situation.
> Single stepping is hard enough to get right on interaction between QEMU,
> KVM and the guest. I didn't really want to make that stuff any more
> complicated.
I'm not sure that it would add much complexity. We'd just need to check
whether any source other than the magic page wants DBCR0_IC on,
to determine whether to exit to userspace or not.
> This approach is really just one out of many - and it's one that's
> nicely self-contained and shouldn't have any impact at all on
> implementations that don't care about it ;).
"Nicely self-contained" is not a phrase I'd associate with 33 patches,
including a bunch of new emulation that probably isn't getting great
test coverage.
-Scott
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code
2014-06-24 23:15 ` Scott Wood
@ 2014-06-24 23:40 ` Alexander Graf
2014-06-25 0:21 ` Scott Wood
0 siblings, 1 reply; 40+ messages in thread
From: Alexander Graf @ 2014-06-24 23:40 UTC (permalink / raw)
To: Scott Wood; +Cc: kvm-ppc, kvm
On 25.06.14 01:15, Scott Wood wrote:
> On Wed, 2014-06-25 at 00:41 +0200, Alexander Graf wrote:
>> On 24.06.14 20:53, Scott Wood wrote:
>>> On Sun, 2014-06-22 at 23:23 +0200, Alexander Graf wrote:
>>>> Howdy,
>>>>
>>>> Ben reminded me a while back that we have a nasty race in our KVM PV code.
>>>>
>>>> We replace a few instructions with longer streams of instructions to check
>>>> whether it's necessary to trap out from it (like mtmsr, no need to trap if
>>>> we only disable interrupts). During those replacement chunks we must not get
>>>> any interrupts, because they might overwrite scratch space that we already
>>>> used to save otherwise clobbered register state into.
>>>>
>>>> So we have a thing called "critical sections" which allows us to atomically
>>>> get in and out of "interrupt disabled" modes without touching MSR. When we
>>>> are supposed to deliver an interrupt into the guest while we are in a critical
>>>> section, we just don't inject the interrupt yet, but leave it be until the
>>>> next trap.
>>>>
>>>> However, we never really know when the next trap would be. For all we know it
>>>> could be never. At this point we created a race that is a potential source
>>>> for interrupt loss or at least deferral.
>>>>
>>>> This patch set aims at solving the race. Instead of merely deferring an
>>>> interrupt when we see such a situation, we go into a special instruction
>>>> interpretation mode. In this mode, we interpret all PPC assembler instructions
>>>> that happen until we are out of the critical section again, at which point
>>>> we can now inject the interrupt.
>>>>
>>>> This bug only affects KVM implementations that make use of the magic page, so
>>>> e500v2, book3s_32 and book3s_64 PR KVM.
>>> Would it be possible to single step through the critical section
>>> instead? Or set a high res timer to expire very quickly?
>> There are a few other alternatives to this implementation:
>>
>> 1) Unmap the magic page, emulate all memory access to it while in
>> critical and irq pending
>> 2) Trigger a timer that sends a request to the vcpu to wake it from
>> potential sleep and inject the irq
>> 3) Single step until we're beyond the critical section
>> 4) Probably more that I can't think of right now :)
>>
>> Each has their good and bad sides. Unmapping the magic page adds
>> complexity to the MMU mapping code, since we need to make sure we don't
>> map it back in on demand and treat faults to it specially.
>>
>> The timer interrupt works, but I'm not fully convinced that it's a good
>> idea for things like MC events which we also block during critical
>> sections on e500v2.
> Are you concerned about the guest seeing machine checks that are (more)
> asynchronous with the error condition? e500v2 machine checks are always
> asynchronous. From the core manual:
>
> Machine check interrupts are typically caused by a hardware or
> memory subsystem failure or by an attempt to access an invalid
> address. They may be caused indirectly by execution of an
> instruction, but may not be recognized or reported until long
> after the processor has executed past the instruction that
> caused the machine check. As such, machine check interrupts are
> not thought of as synchronous or asynchronous nor as precise or
> imprecise.
>
> I don't think the lag would be a problem, and certainly it's better than
> the current situation.
So what value would you set the timer to? If the value is too small, we
never finish the critical section. If it's too big, we add lots of jitter.
>
>> Single stepping is hard enough to get right on interaction between QEMU,
>> KVM and the guest. I didn't really want to make that stuff any more
>> complicated.
> I'm not sure that it would add much complexity. We'd just need to check
> whether any source other than the magic page wants DBCR0_IC on,
> to determine whether to exit to userspace or not.
What if the guest is single stepping itself? How do we determine when to
unset the bit again? When we get out of the critical section? How do we
know what the value was before we set it?
>
>> This approach is really just one out of many - and it's one that's
>> nicely self-contained and shouldn't have any impact at all on
>> implementations that don't care about it ;).
> "Nicely self-contained" is not a phrase I'd associate with 33 patches,
> including a bunch of new emulation that probably isn't getting great
> test coverage.
It means that there's only a single entry point for when the code gets
executed, not that it's very little code.
Eventually this emulation code should get merged with the already
existing in-kernel emulation code. Paul had already started work to
merge the emulators a while ago. He even measured speedups when he sent
all real mode and split real mode code via the interpreter rather than
the entry/exit dance we do today.
Alex
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code
2014-06-24 23:40 ` Alexander Graf
@ 2014-06-25 0:21 ` Scott Wood
2014-07-28 14:10 ` Alexander Graf
0 siblings, 1 reply; 40+ messages in thread
From: Scott Wood @ 2014-06-25 0:21 UTC (permalink / raw)
To: Alexander Graf; +Cc: kvm-ppc, kvm
On Wed, 2014-06-25 at 01:40 +0200, Alexander Graf wrote:
> On 25.06.14 01:15, Scott Wood wrote:
> > On Wed, 2014-06-25 at 00:41 +0200, Alexander Graf wrote:
> >> On 24.06.14 20:53, Scott Wood wrote:
> >>> The timer interrupt works, but I'm not fully convinced that it's a good
> >> idea for things like MC events which we also block during critical
> >> sections on e500v2.
> > Are you concerned about the guest seeing machine checks that are (more)
> > asynchronous with the error condition? e500v2 machine checks are always
> > asynchronous. From the core manual:
> >
> > Machine check interrupts are typically caused by a hardware or
> > memory subsystem failure or by an attempt to access an invalid
> > address. They may be caused indirectly by execution of an
> > instruction, but may not be recognized or reported until long
> > after the processor has executed past the instruction that
> > caused the machine check. As such, machine check interrupts are
> > not thought of as synchronous or asynchronous nor as precise or
> > imprecise.
> >
> > I don't think the lag would be a problem, and certainly it's better than
> > the current situation.
>
> So what value would you set the timer to? If the value is too small, we
> never finish the critical section. If it's too big, we add lots of jitter.
Maybe something like 100us?
Single stepping would be better, though.
> >> Single stepping is hard enough to get right on interaction between QEMU,
> >> KVM and the guest. I didn't really want to make that stuff any more
> >> complicated.
> > I'm not sure that it would add much complexity. We'd just need to check
> > whether any source other than the magic page wants DBCR0_IC on,
> > to determine whether to exit to userspace or not.
>
> What if the guest is single stepping itself? How do we determine when to
> unset the bit again? When we get out of the critical section? How do we
> know what the value was before we set it?
Keep track of each requester of single stepping separately, and only
ever set the real bit by ORing them.
-Scott
^ permalink raw reply [flat|nested] 40+ messages in thread
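Scott's suggestion of per-requester single-step tracking amounts to keeping one flag per source and deriving the hardware bit from their OR. A minimal sketch with invented flag names (not actual KVM state):

```c
#include <assert.h>

/* Invented requester flags; real code would track who asked for
 * single stepping (userspace debugger vs. critical-section logic). */
#define SSTEP_DEBUGGER	0x1
#define SSTEP_CRIT	0x2

/* The effective DBCR0[IC]-style bit is the OR of all requesters,
 * so one source clearing its flag never clobbers another's request. */
static int effective_single_step(unsigned int requesters)
{
	return requesters != 0;
}
```

This is also how it answers Alex's "how do we know what the value was before we set it" question: no requester ever reads or writes the hardware bit directly, so there is no previous value to remember.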
* Re: [PATCH 00/33] KVM: PPC: Fix IRQ race in magic page code
2014-06-25 0:21 ` Scott Wood
@ 2014-07-28 14:10 ` Alexander Graf
0 siblings, 0 replies; 40+ messages in thread
From: Alexander Graf @ 2014-07-28 14:10 UTC (permalink / raw)
To: Scott Wood; +Cc: kvm-ppc, kvm
On 25.06.14 02:21, Scott Wood wrote:
> On Wed, 2014-06-25 at 01:40 +0200, Alexander Graf wrote:
>> On 25.06.14 01:15, Scott Wood wrote:
>>> On Wed, 2014-06-25 at 00:41 +0200, Alexander Graf wrote:
>>>> On 24.06.14 20:53, Scott Wood wrote:
>>>>> The timer interrupt works, but I'm not fully convinced that it's a good
>>>> idea for things like MC events which we also block during critical
>>>> sections on e500v2.
>>> Are you concerned about the guest seeing machine checks that are (more)
>>> asynchronous with the error condition? e500v2 machine checks are always
>>> asynchronous. From the core manual:
>>>
>>> Machine check interrupts are typically caused by a hardware or
>>> memory subsystem failure or by an attempt to access an invalid
>>> address. They may be caused indirectly by execution of an
>>> instruction, but may not be recognized or reported until long
>>> after the processor has executed past the instruction that
>>> caused the machine check. As such, machine check interrupts are
>>> not thought of as synchronous or asynchronous nor as precise or
>>> imprecise.
>>>
>>> I don't think the lag would be a problem, and certainly it's better than
>>> the current situation.
>> So what value would you set the timer to? If the value is too small, we
>> never finish the critical section. If it's too big, we add lots of jitter.
> Maybe something like 100us?
>
> Single stepping would be better, though.
>
>>>> Single stepping is hard enough to get right on interaction between QEMU,
>>>> KVM and the guest. I didn't really want to make that stuff any more
>>>> complicated.
>>> I'm not sure that it would add much complexity. We'd just need to check
>>> whether any source other than the magic page turned wants DCBR0_IC on,
>>> to determine whether to exit to userspace or not.
>> What if the guest is single stepping itself? How do we determine when to
>> unset the bit again? When we get out of the critical section? How do we
>> know what the value was before we set it?
> Keep track of each requester of single stepping separately, and only
> ever set the real bit by ORing them.
Considering that Paul started working on integrating the in-kernel
emulator with KVM I think we're best off to just wait for that one and
then use it :).
Alex
^ permalink raw reply [flat|nested] 40+ messages in thread
end of thread, other threads:[~2014-07-28 14:10 UTC | newest]
Thread overview: 40+ messages