* [PATCH v2 0/5] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
@ 2014-12-07 9:37 Nikolay Nikolaev
2014-12-07 9:37 ` [PATCH v2 1/5] KVM: Redesign kvm_io_bus_ API to pass VCPU structure to the callbacks Nikolay Nikolaev
` (5 more replies)
0 siblings, 6 replies; 17+ messages in thread
From: Nikolay Nikolaev @ 2014-12-07 9:37 UTC (permalink / raw)
To: linux-arm-kernel
The IOEVENTFD KVM capability is a prerequisite for vhost support.
This series enables the ioeventfd KVM capability on ARM.
The implementation routes MMIO accesses from the IO abort handler to the KVM IO bus.
If an ioeventfd handler is already registered for the faulting address, its file
descriptor is signalled.
We extended the kvm_io_bus_ API to pass the VCPU struct pointer to the device
callbacks, and the VGIC MMIO access now goes through this API as well. For this to
work, the VGIC registers a kvm_io_device covering the whole distributor MMIO region.
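For reference, the resulting in-kernel MMIO path looks roughly like this (a
simplified sketch; the exact code is in patches 2 and 3):

	/* in the IO abort path; is_write, addr, len and data come from the decoded fault */
	if (is_write)
		ret = kvm_io_bus_write(vcpu, KVM_MMIO_BUS, addr, len, &data);
	else
		ret = kvm_io_bus_read(vcpu, KVM_MMIO_BUS, addr, len, &data);
	/*
	 * ret == 0: an in-kernel device (vGIC or ioeventfd) handled the access;
	 * otherwise the access is forwarded to userspace for emulation.
	 */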
The patches are based on Andre's latest vGICv3 work, available here:
http://www.linux-arm.org/git?p=linux-ap.git;a=shortlog;h=refs/heads/kvm-gicv3/v4
The code was tested on a dual Cortex-A15 Exynos5250 (ARM Chromebook).
The ARM64 build was verified, but not run on actual hardware.
Changes since v1:
- fixed x86 compilation
- GICv2/GICv3 dist base selection
- added vgic_unregister_kvm_io_dev to free the iodev resources
- enable eventfd on ARM64
---
Nikolay Nikolaev (5):
KVM: Redesign kvm_io_bus_ API to pass VCPU structure to the callbacks.
KVM: ARM: on IO mem abort - route the call to KVM MMIO bus
KVM: ARM VGIC add kvm_io_bus_ frontend
ARM/ARM64: enable linking against eventfd
ARM: enable KVM_CAP_IOEVENTFD
arch/arm/kvm/Kconfig | 1
arch/arm/kvm/Makefile | 2 -
arch/arm/kvm/arm.c | 3 +
arch/arm/kvm/mmio.c | 32 +++++++++++
arch/arm64/kvm/Kconfig | 1
arch/arm64/kvm/Makefile | 2 -
arch/ia64/kvm/kvm-ia64.c | 4 +
arch/powerpc/kvm/mpic.c | 10 ++-
arch/powerpc/kvm/powerpc.c | 4 +
arch/s390/kvm/diag.c | 2 -
arch/x86/kvm/i8254.c | 14 +++--
arch/x86/kvm/i8259.c | 12 ++--
arch/x86/kvm/lapic.c | 4 +
arch/x86/kvm/vmx.c | 2 -
arch/x86/kvm/x86.c | 13 ++---
include/kvm/arm_vgic.h | 3 -
include/linux/kvm_host.h | 10 ++-
virt/kvm/arm/vgic.c | 127 +++++++++++++++++++++++++++++++++++++++++---
virt/kvm/coalesced_mmio.c | 5 +-
virt/kvm/eventfd.c | 4 +
virt/kvm/ioapic.c | 8 +--
virt/kvm/iodev.h | 23 +++++---
virt/kvm/kvm_main.c | 32 ++++++-----
23 files changed, 237 insertions(+), 81 deletions(-)
--
Signature
* [PATCH v2 1/5] KVM: Redesign kvm_io_bus_ API to pass VCPU structure to the callbacks.
2014-12-07 9:37 [PATCH v2 0/5] ARM: KVM: Enable the ioeventfd capability of KVM on ARM Nikolay Nikolaev
@ 2014-12-07 9:37 ` Nikolay Nikolaev
2015-01-12 17:10 ` Eric Auger
2014-12-07 9:37 ` [PATCH v2 2/5] KVM: ARM: on IO mem abort - route the call to KVM MMIO bus Nikolay Nikolaev
` (4 subsequent siblings)
5 siblings, 1 reply; 17+ messages in thread
From: Nikolay Nikolaev @ 2014-12-07 9:37 UTC (permalink / raw)
To: linux-arm-kernel
This is needed in e.g. ARM vGIC emulation, where the MMIO handling
depends on the VCPU that does the access.
Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
---
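(Not part of the patch - a hypothetical handler illustrating the new callback
signature; per-VCPU state is now directly reachable from the device emulation:)

	static int example_mmio_read(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
				     gpa_t addr, int len, void *val)
	{
		pr_debug("MMIO read at 0x%llx by vcpu %d\n", (u64)addr, vcpu->vcpu_id);
		memset(val, 0, len);	/* a real device would return its register value here */
		return 0;		/* 0 means the access was handled */
	}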
arch/ia64/kvm/kvm-ia64.c | 4 ++--
arch/powerpc/kvm/mpic.c | 10 ++++++----
arch/powerpc/kvm/powerpc.c | 4 ++--
arch/s390/kvm/diag.c | 2 +-
arch/x86/kvm/i8254.c | 14 +++++++++-----
arch/x86/kvm/i8259.c | 12 ++++++------
arch/x86/kvm/lapic.c | 4 ++--
arch/x86/kvm/vmx.c | 2 +-
arch/x86/kvm/x86.c | 13 +++++++------
include/linux/kvm_host.h | 10 +++++-----
virt/kvm/coalesced_mmio.c | 5 +++--
virt/kvm/eventfd.c | 4 ++--
virt/kvm/ioapic.c | 8 ++++----
virt/kvm/iodev.h | 23 +++++++++++++++--------
virt/kvm/kvm_main.c | 32 ++++++++++++++++----------------
15 files changed, 81 insertions(+), 66 deletions(-)
diff --git a/arch/ia64/kvm/kvm-ia64.c b/arch/ia64/kvm/kvm-ia64.c
index dbe46f4..287df8d 100644
--- a/arch/ia64/kvm/kvm-ia64.c
+++ b/arch/ia64/kvm/kvm-ia64.c
@@ -246,10 +246,10 @@ static int handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
return 0;
mmio:
if (p->dir)
- r = kvm_io_bus_read(vcpu->kvm, KVM_MMIO_BUS, p->addr,
+ r = kvm_io_bus_read(vcpu, KVM_MMIO_BUS, p->addr,
p->size, &p->data);
else
- r = kvm_io_bus_write(vcpu->kvm, KVM_MMIO_BUS, p->addr,
+ r = kvm_io_bus_write(vcpu, KVM_MMIO_BUS, p->addr,
p->size, &p->data);
if (r)
printk(KERN_ERR"kvm: No iodevice found! addr:%lx\n", p->addr);
diff --git a/arch/powerpc/kvm/mpic.c b/arch/powerpc/kvm/mpic.c
index 39b3a8f..8542f07 100644
--- a/arch/powerpc/kvm/mpic.c
+++ b/arch/powerpc/kvm/mpic.c
@@ -1374,8 +1374,9 @@ static int kvm_mpic_write_internal(struct openpic *opp, gpa_t addr, u32 val)
return -ENXIO;
}
-static int kvm_mpic_read(struct kvm_io_device *this, gpa_t addr,
- int len, void *ptr)
+static int kvm_mpic_read(struct kvm_vcpu *vcpu,
+ struct kvm_io_device *this,
+ gpa_t addr, int len, void *ptr)
{
struct openpic *opp = container_of(this, struct openpic, mmio);
int ret;
@@ -1415,8 +1416,9 @@ static int kvm_mpic_read(struct kvm_io_device *this, gpa_t addr,
return ret;
}
-static int kvm_mpic_write(struct kvm_io_device *this, gpa_t addr,
- int len, const void *ptr)
+static int kvm_mpic_write(struct kvm_vcpu *vcpu,
+ struct kvm_io_device *this,
+ gpa_t addr, int len, const void *ptr)
{
struct openpic *opp = container_of(this, struct openpic, mmio);
int ret;
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index c1f8f53..5ac065b 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -814,7 +814,7 @@ int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
idx = srcu_read_lock(&vcpu->kvm->srcu);
- ret = kvm_io_bus_read(vcpu->kvm, KVM_MMIO_BUS, run->mmio.phys_addr,
+ ret = kvm_io_bus_read(vcpu, KVM_MMIO_BUS, run->mmio.phys_addr,
bytes, &run->mmio.data);
srcu_read_unlock(&vcpu->kvm->srcu, idx);
@@ -887,7 +887,7 @@ int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
idx = srcu_read_lock(&vcpu->kvm->srcu);
- ret = kvm_io_bus_write(vcpu->kvm, KVM_MMIO_BUS, run->mmio.phys_addr,
+ ret = kvm_io_bus_write(vcpu, KVM_MMIO_BUS, run->mmio.phys_addr,
bytes, &run->mmio.data);
srcu_read_unlock(&vcpu->kvm->srcu, idx);
diff --git a/arch/s390/kvm/diag.c b/arch/s390/kvm/diag.c
index 9254aff..329ec75 100644
--- a/arch/s390/kvm/diag.c
+++ b/arch/s390/kvm/diag.c
@@ -213,7 +213,7 @@ static int __diag_virtio_hypercall(struct kvm_vcpu *vcpu)
* - gpr 3 contains the virtqueue index (passed as datamatch)
* - gpr 4 contains the index on the bus (optionally)
*/
- ret = kvm_io_bus_write_cookie(vcpu->kvm, KVM_VIRTIO_CCW_NOTIFY_BUS,
+ ret = kvm_io_bus_write_cookie(vcpu, KVM_VIRTIO_CCW_NOTIFY_BUS,
vcpu->run->s.regs.gprs[2] & 0xffffffff,
8, &vcpu->run->s.regs.gprs[3],
vcpu->run->s.regs.gprs[4]);
diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c
index 298781d..4dce6f8 100644
--- a/arch/x86/kvm/i8254.c
+++ b/arch/x86/kvm/i8254.c
@@ -443,7 +443,8 @@ static inline int pit_in_range(gpa_t addr)
(addr < KVM_PIT_BASE_ADDRESS + KVM_PIT_MEM_LENGTH));
}
-static int pit_ioport_write(struct kvm_io_device *this,
+static int pit_ioport_write(struct kvm_vcpu *vcpu,
+ struct kvm_io_device *this,
gpa_t addr, int len, const void *data)
{
struct kvm_pit *pit = dev_to_pit(this);
@@ -519,7 +520,8 @@ static int pit_ioport_write(struct kvm_io_device *this,
return 0;
}
-static int pit_ioport_read(struct kvm_io_device *this,
+static int pit_ioport_read(struct kvm_vcpu *vcpu,
+ struct kvm_io_device *this,
gpa_t addr, int len, void *data)
{
struct kvm_pit *pit = dev_to_pit(this);
@@ -589,7 +591,8 @@ static int pit_ioport_read(struct kvm_io_device *this,
return 0;
}
-static int speaker_ioport_write(struct kvm_io_device *this,
+static int speaker_ioport_write(struct kvm_vcpu *vcpu,
+ struct kvm_io_device *this,
gpa_t addr, int len, const void *data)
{
struct kvm_pit *pit = speaker_to_pit(this);
@@ -606,8 +609,9 @@ static int speaker_ioport_write(struct kvm_io_device *this,
return 0;
}
-static int speaker_ioport_read(struct kvm_io_device *this,
- gpa_t addr, int len, void *data)
+static int speaker_ioport_read(struct kvm_vcpu *vcpu,
+ struct kvm_io_device *this,
+ gpa_t addr, int len, void *data)
{
struct kvm_pit *pit = speaker_to_pit(this);
struct kvm_kpit_state *pit_state = &pit->pit_state;
diff --git a/arch/x86/kvm/i8259.c b/arch/x86/kvm/i8259.c
index cc31f7c..8ff4eaa 100644
--- a/arch/x86/kvm/i8259.c
+++ b/arch/x86/kvm/i8259.c
@@ -528,42 +528,42 @@ static int picdev_read(struct kvm_pic *s,
return 0;
}
-static int picdev_master_write(struct kvm_io_device *dev,
+static int picdev_master_write(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
gpa_t addr, int len, const void *val)
{
return picdev_write(container_of(dev, struct kvm_pic, dev_master),
addr, len, val);
}
-static int picdev_master_read(struct kvm_io_device *dev,
+static int picdev_master_read(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
gpa_t addr, int len, void *val)
{
return picdev_read(container_of(dev, struct kvm_pic, dev_master),
addr, len, val);
}
-static int picdev_slave_write(struct kvm_io_device *dev,
+static int picdev_slave_write(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
gpa_t addr, int len, const void *val)
{
return picdev_write(container_of(dev, struct kvm_pic, dev_slave),
addr, len, val);
}
-static int picdev_slave_read(struct kvm_io_device *dev,
+static int picdev_slave_read(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
gpa_t addr, int len, void *val)
{
return picdev_read(container_of(dev, struct kvm_pic, dev_slave),
addr, len, val);
}
-static int picdev_eclr_write(struct kvm_io_device *dev,
+static int picdev_eclr_write(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
gpa_t addr, int len, const void *val)
{
return picdev_write(container_of(dev, struct kvm_pic, dev_eclr),
addr, len, val);
}
-static int picdev_eclr_read(struct kvm_io_device *dev,
+static int picdev_eclr_read(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
gpa_t addr, int len, void *val)
{
return picdev_read(container_of(dev, struct kvm_pic, dev_eclr),
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index b8345dd..c071c36 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1007,7 +1007,7 @@ static int apic_mmio_in_range(struct kvm_lapic *apic, gpa_t addr)
addr < apic->base_address + LAPIC_MMIO_LENGTH;
}
-static int apic_mmio_read(struct kvm_io_device *this,
+static int apic_mmio_read(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
gpa_t address, int len, void *data)
{
struct kvm_lapic *apic = to_lapic(this);
@@ -1253,7 +1253,7 @@ static int apic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
return ret;
}
-static int apic_mmio_write(struct kvm_io_device *this,
+static int apic_mmio_write(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
gpa_t address, int len, const void *data)
{
struct kvm_lapic *apic = to_lapic(this);
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 3e556c6..e6d9f01 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -5620,7 +5620,7 @@ static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
gpa_t gpa;
gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
- if (!kvm_io_bus_write(vcpu->kvm, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
+ if (!kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
skip_emulated_instruction(vcpu);
return 1;
}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 0033df3..e0462db 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4052,8 +4052,8 @@ static int vcpu_mmio_write(struct kvm_vcpu *vcpu, gpa_t addr, int len,
do {
n = min(len, 8);
if (!(vcpu->arch.apic &&
- !kvm_iodevice_write(&vcpu->arch.apic->dev, addr, n, v))
- && kvm_io_bus_write(vcpu->kvm, KVM_MMIO_BUS, addr, n, v))
+ !kvm_iodevice_write(vcpu, &vcpu->arch.apic->dev, addr, n, v))
+ && kvm_io_bus_write(vcpu, KVM_MMIO_BUS, addr, n, v))
break;
handled += n;
addr += n;
@@ -4072,8 +4072,9 @@ static int vcpu_mmio_read(struct kvm_vcpu *vcpu, gpa_t addr, int len, void *v)
do {
n = min(len, 8);
if (!(vcpu->arch.apic &&
- !kvm_iodevice_read(&vcpu->arch.apic->dev, addr, n, v))
- && kvm_io_bus_read(vcpu->kvm, KVM_MMIO_BUS, addr, n, v))
+ !kvm_iodevice_read(vcpu, &vcpu->arch.apic->dev,
+ addr, n, v))
+ && kvm_io_bus_read(vcpu, KVM_MMIO_BUS, addr, n, v))
break;
trace_kvm_mmio(KVM_TRACE_MMIO_READ, n, addr, *(u64 *)v);
handled += n;
@@ -4565,10 +4566,10 @@ static int kernel_pio(struct kvm_vcpu *vcpu, void *pd)
int r;
if (vcpu->arch.pio.in)
- r = kvm_io_bus_read(vcpu->kvm, KVM_PIO_BUS, vcpu->arch.pio.port,
+ r = kvm_io_bus_read(vcpu, KVM_PIO_BUS, vcpu->arch.pio.port,
vcpu->arch.pio.size, pd);
else
- r = kvm_io_bus_write(vcpu->kvm, KVM_PIO_BUS,
+ r = kvm_io_bus_write(vcpu, KVM_PIO_BUS,
vcpu->arch.pio.port, vcpu->arch.pio.size,
pd);
return r;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index b1aa88d..f0d1ce7 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -168,12 +168,12 @@ enum kvm_bus {
KVM_NR_BUSES
};
-int kvm_io_bus_write(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
+int kvm_io_bus_write(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx, gpa_t addr,
int len, const void *val);
-int kvm_io_bus_write_cookie(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
- int len, const void *val, long cookie);
-int kvm_io_bus_read(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr, int len,
- void *val);
+int kvm_io_bus_write_cookie(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx,
+ gpa_t addr, int len, const void *val, long cookie);
+int kvm_io_bus_read(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx, gpa_t addr,
+ int len, void *val);
int kvm_io_bus_register_dev(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
int len, struct kvm_io_device *dev);
int kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
index 00d8642..c831a40 100644
--- a/virt/kvm/coalesced_mmio.c
+++ b/virt/kvm/coalesced_mmio.c
@@ -60,8 +60,9 @@ static int coalesced_mmio_has_room(struct kvm_coalesced_mmio_dev *dev)
return 1;
}
-static int coalesced_mmio_write(struct kvm_io_device *this,
- gpa_t addr, int len, const void *val)
+static int coalesced_mmio_write(struct kvm_vcpu *vcpu,
+ struct kvm_io_device *this, gpa_t addr,
+ int len, const void *val)
{
struct kvm_coalesced_mmio_dev *dev = to_mmio(this);
struct kvm_coalesced_mmio_ring *ring = dev->kvm->coalesced_mmio_ring;
diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
index b0fb390..7202039 100644
--- a/virt/kvm/eventfd.c
+++ b/virt/kvm/eventfd.c
@@ -719,8 +719,8 @@ ioeventfd_in_range(struct _ioeventfd *p, gpa_t addr, int len, const void *val)
/* MMIO/PIO writes trigger an event if the addr/val match */
static int
-ioeventfd_write(struct kvm_io_device *this, gpa_t addr, int len,
- const void *val)
+ioeventfd_write(struct kvm_vcpu *vcpu, struct kvm_io_device *this, gpa_t addr,
+ int len, const void *val)
{
struct _ioeventfd *p = to_ioeventfd(this);
diff --git a/virt/kvm/ioapic.c b/virt/kvm/ioapic.c
index 0ba4057..a032909 100644
--- a/virt/kvm/ioapic.c
+++ b/virt/kvm/ioapic.c
@@ -505,8 +505,8 @@ static inline int ioapic_in_range(struct kvm_ioapic *ioapic, gpa_t addr)
(addr < ioapic->base_address + IOAPIC_MEM_LENGTH)));
}
-static int ioapic_mmio_read(struct kvm_io_device *this, gpa_t addr, int len,
- void *val)
+static int ioapic_mmio_read(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
+ gpa_t addr, int len, void *val)
{
struct kvm_ioapic *ioapic = to_ioapic(this);
u32 result;
@@ -548,8 +548,8 @@ static int ioapic_mmio_read(struct kvm_io_device *this, gpa_t addr, int len,
return 0;
}
-static int ioapic_mmio_write(struct kvm_io_device *this, gpa_t addr, int len,
- const void *val)
+static int ioapic_mmio_write(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
+ gpa_t addr, int len, const void *val)
{
struct kvm_ioapic *ioapic = to_ioapic(this);
u32 data;
diff --git a/virt/kvm/iodev.h b/virt/kvm/iodev.h
index 12fd3ca..7dd88cb 100644
--- a/virt/kvm/iodev.h
+++ b/virt/kvm/iodev.h
@@ -20,6 +20,7 @@
#include <asm/errno.h>
struct kvm_io_device;
+struct kvm_vcpu;
/**
* kvm_io_device_ops are called under kvm slots_lock.
@@ -27,11 +28,13 @@ struct kvm_io_device;
* or non-zero to have it passed to the next device.
**/
struct kvm_io_device_ops {
- int (*read)(struct kvm_io_device *this,
+ int (*read)(struct kvm_vcpu *vcpu,
+ struct kvm_io_device *this,
gpa_t addr,
int len,
void *val);
- int (*write)(struct kvm_io_device *this,
+ int (*write)(struct kvm_vcpu *vcpu,
+ struct kvm_io_device *this,
gpa_t addr,
int len,
const void *val);
@@ -49,16 +52,20 @@ static inline void kvm_iodevice_init(struct kvm_io_device *dev,
dev->ops = ops;
}
-static inline int kvm_iodevice_read(struct kvm_io_device *dev,
- gpa_t addr, int l, void *v)
+static inline int kvm_iodevice_read(struct kvm_vcpu *vcpu,
+ struct kvm_io_device *dev, gpa_t addr,
+ int l, void *v)
{
- return dev->ops->read ? dev->ops->read(dev, addr, l, v) : -EOPNOTSUPP;
+ return dev->ops->read ? dev->ops->read(vcpu, dev, addr, l, v)
+ : -EOPNOTSUPP;
}
-static inline int kvm_iodevice_write(struct kvm_io_device *dev,
- gpa_t addr, int l, const void *v)
+static inline int kvm_iodevice_write(struct kvm_vcpu *vcpu,
+ struct kvm_io_device *dev, gpa_t addr,
+ int l, const void *v)
{
- return dev->ops->write ? dev->ops->write(dev, addr, l, v) : -EOPNOTSUPP;
+ return dev->ops->write ? dev->ops->write(vcpu, dev, addr, l, v)
+ : -EOPNOTSUPP;
}
static inline void kvm_iodevice_destructor(struct kvm_io_device *dev)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 3cee7b1..16e5152 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2924,7 +2924,7 @@ static int kvm_io_bus_get_first_dev(struct kvm_io_bus *bus,
return off;
}
-static int __kvm_io_bus_write(struct kvm_io_bus *bus,
+static int __kvm_io_bus_write(struct kvm_vcpu *vcpu, struct kvm_io_bus *bus,
struct kvm_io_range *range, const void *val)
{
int idx;
@@ -2935,7 +2935,7 @@ static int __kvm_io_bus_write(struct kvm_io_bus *bus,
while (idx < bus->dev_count &&
kvm_io_bus_cmp(range, &bus->range[idx]) == 0) {
- if (!kvm_iodevice_write(bus->range[idx].dev, range->addr,
+ if (!kvm_iodevice_write(vcpu, bus->range[idx].dev, range->addr,
range->len, val))
return idx;
idx++;
@@ -2945,7 +2945,7 @@ static int __kvm_io_bus_write(struct kvm_io_bus *bus,
}
/* kvm_io_bus_write - called under kvm->slots_lock */
-int kvm_io_bus_write(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
+int kvm_io_bus_write(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx, gpa_t addr,
int len, const void *val)
{
struct kvm_io_bus *bus;
@@ -2957,14 +2957,14 @@ int kvm_io_bus_write(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
.len = len,
};
- bus = srcu_dereference(kvm->buses[bus_idx], &kvm->srcu);
- r = __kvm_io_bus_write(bus, &range, val);
+ bus = srcu_dereference(vcpu->kvm->buses[bus_idx], &kvm->srcu);
+ r = __kvm_io_bus_write(vcpu, bus, &range, val);
return r < 0 ? r : 0;
}
/* kvm_io_bus_write_cookie - called under kvm->slots_lock */
-int kvm_io_bus_write_cookie(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
- int len, const void *val, long cookie)
+int kvm_io_bus_write_cookie(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx,
+ gpa_t addr, int len, const void *val, long cookie)
{
struct kvm_io_bus *bus;
struct kvm_io_range range;
@@ -2974,12 +2974,12 @@ int kvm_io_bus_write_cookie(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
.len = len,
};
- bus = srcu_dereference(kvm->buses[bus_idx], &kvm->srcu);
+ bus = srcu_dereference(vcpu->kvm->buses[bus_idx], &kvm->srcu);
/* First try the device referenced by cookie. */
if ((cookie >= 0) && (cookie < bus->dev_count) &&
(kvm_io_bus_cmp(&range, &bus->range[cookie]) == 0))
- if (!kvm_iodevice_write(bus->range[cookie].dev, addr, len,
+ if (!kvm_iodevice_write(vcpu, bus->range[cookie].dev, addr, len,
val))
return cookie;
@@ -2987,11 +2987,11 @@ int kvm_io_bus_write_cookie(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
* cookie contained garbage; fall back to search and return the
* correct cookie value.
*/
- return __kvm_io_bus_write(bus, &range, val);
+ return __kvm_io_bus_write(vcpu, bus, &range, val);
}
-static int __kvm_io_bus_read(struct kvm_io_bus *bus, struct kvm_io_range *range,
- void *val)
+static int __kvm_io_bus_read(struct kvm_vcpu *vcpu, struct kvm_io_bus *bus,
+ struct kvm_io_range *range, void *val)
{
int idx;
@@ -3001,7 +3001,7 @@ static int __kvm_io_bus_read(struct kvm_io_bus *bus, struct kvm_io_range *range,
while (idx < bus->dev_count &&
kvm_io_bus_cmp(range, &bus->range[idx]) == 0) {
- if (!kvm_iodevice_read(bus->range[idx].dev, range->addr,
+ if (!kvm_iodevice_read(vcpu, bus->range[idx].dev, range->addr,
range->len, val))
return idx;
idx++;
@@ -3012,7 +3012,7 @@ static int __kvm_io_bus_read(struct kvm_io_bus *bus, struct kvm_io_range *range,
EXPORT_SYMBOL_GPL(kvm_io_bus_write);
/* kvm_io_bus_read - called under kvm->slots_lock */
-int kvm_io_bus_read(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
+int kvm_io_bus_read(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx, gpa_t addr,
int len, void *val)
{
struct kvm_io_bus *bus;
@@ -3024,8 +3024,8 @@ int kvm_io_bus_read(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
.len = len,
};
- bus = srcu_dereference(kvm->buses[bus_idx], &kvm->srcu);
- r = __kvm_io_bus_read(bus, &range, val);
+ bus = srcu_dereference(vcpu->kvm->buses[bus_idx], &kvm->srcu);
+ r = __kvm_io_bus_read(vcpu, bus, &range, val);
return r < 0 ? r : 0;
}
* [PATCH v2 2/5] KVM: ARM: on IO mem abort - route the call to KVM MMIO bus
2014-12-07 9:37 [PATCH v2 0/5] ARM: KVM: Enable the ioeventfd capability of KVM on ARM Nikolay Nikolaev
2014-12-07 9:37 ` [PATCH v2 1/5] KVM: Redesign kvm_io_bus_ API to pass VCPU structure to the callbacks Nikolay Nikolaev
@ 2014-12-07 9:37 ` Nikolay Nikolaev
2015-01-12 17:09 ` Eric Auger
2014-12-07 9:37 ` [PATCH v2 3/5] KVM: ARM VGIC add kvm_io_bus_ frontend Nikolay Nikolaev
` (3 subsequent siblings)
5 siblings, 1 reply; 17+ messages in thread
From: Nikolay Nikolaev @ 2014-12-07 9:37 UTC (permalink / raw)
To: linux-arm-kernel
On an IO memory abort, try to handle the MMIO access through the
KVM-registered read/write callbacks. This is done by invoking the relevant
kvm_io_bus_* API.
Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
---
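(Not part of the commit message - a note on the return convention this relies on,
assuming the same kvm_io_bus_* semantics as on the other architectures:
kvm_io_bus_read/write() return 0 when an in-kernel device claimed the access and
a negative error such as -EOPNOTSUPP when none matched, so the caller boils down to:)

	ret = kvm_io_bus_write(vcpu, KVM_MMIO_BUS, mmio->phys_addr,
			       mmio->len, &mmio->data);
	if (!ret) {
		/* handled in the kernel: complete the access right away */
		kvm_prepare_mmio(run, mmio);
		kvm_handle_mmio_return(vcpu, run);
	}
	return !ret;	/* true = handled here, false = emulate in userspace */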
arch/arm/kvm/mmio.c | 33 +++++++++++++++++++++++++++++++++
1 file changed, 33 insertions(+)
diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
index 4cb5a93..e42469f 100644
--- a/arch/arm/kvm/mmio.c
+++ b/arch/arm/kvm/mmio.c
@@ -162,6 +162,36 @@ static int decode_hsr(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
return 0;
}
+/**
+ * handle_kernel_mmio - handle an in-kernel MMIO access
+ * @vcpu: pointer to the vcpu performing the access
+ * @run: pointer to the kvm_run structure
+ * @mmio: pointer to the data describing the access
+ *
+ * returns true if the MMIO access has been performed in kernel space,
+ * and false if it needs to be emulated in user space.
+ */
+static bool handle_kernel_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ struct kvm_exit_mmio *mmio)
+{
+ int ret;
+
+ if (mmio->is_write) {
+ ret = kvm_io_bus_write(vcpu, KVM_MMIO_BUS, mmio->phys_addr,
+ mmio->len, &mmio->data);
+
+ } else {
+ ret = kvm_io_bus_read(vcpu, KVM_MMIO_BUS, mmio->phys_addr,
+ mmio->len, &mmio->data);
+ }
+ if (!ret) {
+ kvm_prepare_mmio(run, mmio);
+ kvm_handle_mmio_return(vcpu, run);
+ }
+
+ return !ret;
+}
+
int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
phys_addr_t fault_ipa)
{
@@ -200,6 +230,9 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
if (vgic_handle_mmio(vcpu, run, &mmio))
return 1;
+ if (handle_kernel_mmio(vcpu, run, &mmio))
+ return 1;
+
kvm_prepare_mmio(run, &mmio);
return 0;
}
* [PATCH v2 3/5] KVM: ARM VGIC add kvm_io_bus_ frontend
2014-12-07 9:37 [PATCH v2 0/5] ARM: KVM: Enable the ioeventfd capability of KVM on ARM Nikolay Nikolaev
2014-12-07 9:37 ` [PATCH v2 1/5] KVM: Redesign kvm_io_bus_ API to pass VCPU structure to the callbacks Nikolay Nikolaev
2014-12-07 9:37 ` [PATCH v2 2/5] KVM: ARM: on IO mem abort - route the call to KVM MMIO bus Nikolay Nikolaev
@ 2014-12-07 9:37 ` Nikolay Nikolaev
2015-01-12 21:41 ` Eric Auger
2014-12-07 9:38 ` [PATCH v2 4/5] ARM/ARM64: enable linking against eventfd Nikolay Nikolaev
` (2 subsequent siblings)
5 siblings, 1 reply; 17+ messages in thread
From: Nikolay Nikolaev @ 2014-12-07 9:37 UTC (permalink / raw)
To: linux-arm-kernel
In io_mem_abort, remove the call to vgic_handle_mmio. The goal is to have a
single MMIO handling path - that is, through the kvm_io_bus_ API.
Register a kvm_io_device in kvm_vgic_init covering the whole vGIC distributor
MMIO region. Both read and write calls are redirected to vgic_io_dev_access,
where a kvm_exit_mmio structure is composed and passed on to vm_ops.handle_mmio.
Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
---
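(For illustration only - a condensed sketch of the registration added below, with
error handling omitted; the distributor size depends on the GIC model:)

	struct kvm_io_device *dev = kzalloc(sizeof(*dev), GFP_KERNEL);

	kvm_iodevice_init(dev, &vgic_io_dev_ops);
	mutex_lock(&kvm->slots_lock);
	ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, dist->vgic_dist_base,
				      KVM_VGIC_V2_DIST_SIZE /* or KVM_VGIC_V3_DIST_SIZE */,
				      dev);
	mutex_unlock(&kvm->slots_lock);
	kvm->arch.vgic.io_dev = dev;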
arch/arm/kvm/mmio.c | 3 -
include/kvm/arm_vgic.h | 3 -
virt/kvm/arm/vgic.c | 127 ++++++++++++++++++++++++++++++++++++++++++++----
3 files changed, 118 insertions(+), 15 deletions(-)
diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
index e42469f..bf466c8 100644
--- a/arch/arm/kvm/mmio.c
+++ b/arch/arm/kvm/mmio.c
@@ -227,9 +227,6 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
if (mmio.is_write)
mmio_write_buf(mmio.data, mmio.len, data);
- if (vgic_handle_mmio(vcpu, run, &mmio))
- return 1;
-
if (handle_kernel_mmio(vcpu, run, &mmio))
return 1;
diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index e452ef7..d9b7d2a 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -233,6 +233,7 @@ struct vgic_dist {
unsigned long *irq_pending_on_cpu;
struct vgic_vm_ops vm_ops;
+ struct kvm_io_device *io_dev;
#endif
};
@@ -307,8 +308,6 @@ int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int irq_num,
bool level);
void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg);
int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu);
-bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
- struct kvm_exit_mmio *mmio);
#define irqchip_in_kernel(k) (!!((k)->arch.vgic.in_kernel))
#define vgic_initialized(k) ((k)->arch.vgic.ready)
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index bd74207..1c7cbec 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -31,6 +31,9 @@
#include <asm/kvm_emulate.h>
#include <asm/kvm_arm.h>
#include <asm/kvm_mmu.h>
+#include <asm/kvm.h>
+
+#include "iodev.h"
/*
* How the whole thing works (courtesy of Christoffer Dall):
@@ -776,27 +779,127 @@ bool vgic_handle_mmio_range(struct kvm_vcpu *vcpu, struct kvm_run *run,
}
/**
- * vgic_handle_mmio - handle an in-kernel MMIO access for the GIC emulation
+ * vgic_io_dev_access - handle an in-kernel MMIO access for the GIC emulation
* @vcpu: pointer to the vcpu performing the access
- * @run: pointer to the kvm_run structure
- * @mmio: pointer to the data describing the access
+ * @this: pointer to the kvm_io_device structure
+ * @addr: the MMIO address being accessed
+ * @len: the length of the accessed data
+ * @val: pointer to the value being written,
+ * or where the read operation will store its result
+ * @is_write: flag to show whether a write access is performed
*
- * returns true if the MMIO access has been performed in kernel space,
- * and false if it needs to be emulated in user space.
+ * returns 0 if the MMIO access has been performed in kernel space,
+ * and 1 if it needs to be emulated in user space.
* Calls the actual handling routine for the selected VGIC model.
*/
-bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
- struct kvm_exit_mmio *mmio)
+static int vgic_io_dev_access(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
+ gpa_t addr, int len, void *val, bool is_write)
{
- if (!irqchip_in_kernel(vcpu->kvm))
- return false;
+ struct kvm_exit_mmio mmio;
+ bool ret;
+
+ mmio = (struct kvm_exit_mmio) {
+ .phys_addr = addr,
+ .len = len,
+ .is_write = is_write,
+ };
+
+ if (is_write)
+ memcpy(mmio.data, val, len);
/*
* This will currently call either vgic_v2_handle_mmio() or
* vgic_v3_handle_mmio(), which in turn will call
* vgic_handle_mmio_range() defined above.
*/
- return vcpu->kvm->arch.vgic.vm_ops.handle_mmio(vcpu, run, mmio);
+ ret = vcpu->kvm->arch.vgic.vm_ops.handle_mmio(vcpu, vcpu->run, &mmio);
+
+ if (!is_write)
+ memcpy(val, mmio.data, len);
+
+ return ret ? 0 : 1;
+}
+
+static int vgic_io_dev_read(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
+ gpa_t addr, int len, void *val)
+{
+ return vgic_io_dev_access(vcpu, this, addr, len, val, false);
+}
+
+static int vgic_io_dev_write(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
+ gpa_t addr, int len, const void *val)
+{
+ return vgic_io_dev_access(vcpu, this, addr, len, (void *)val, true);
+}
+
+static const struct kvm_io_device_ops vgic_io_dev_ops = {
+ .read = vgic_io_dev_read,
+ .write = vgic_io_dev_write,
+};
+
+static int vgic_register_kvm_io_dev(struct kvm *kvm)
+{
+ int len, ret;
+
+ struct vgic_dist *dist = &kvm->arch.vgic;
+ unsigned long base = dist->vgic_dist_base;
+ u32 type = kvm->arch.vgic.vgic_model;
+ struct kvm_io_device *dev;
+
+ if (IS_VGIC_ADDR_UNDEF(base)) {
+ kvm_err("Need to set vgic distributor address first\n");
+ return -ENXIO;
+ }
+
+ dev = kzalloc(sizeof(struct kvm_io_device), GFP_KERNEL);
+ if (!dev)
+ return -ENOMEM;
+
+ switch (type) {
+ case KVM_DEV_TYPE_ARM_VGIC_V2:
+ len = KVM_VGIC_V2_DIST_SIZE;
+ break;
+#ifdef CONFIG_ARM_GIC_V3
+ case KVM_DEV_TYPE_ARM_VGIC_V3:
+ len = KVM_VGIC_V3_DIST_SIZE;
+ break;
+#endif
+ default:
+ kvm_err("Unsupported VGIC model\n");
+ goto out_free_dev;
+ break;
+ }
+
+ kvm_iodevice_init(dev, &vgic_io_dev_ops);
+
+ mutex_lock(&kvm->slots_lock);
+
+ ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS,
+ base, len, dev);
+ if (ret < 0)
+ goto out_unlock;
+ mutex_unlock(&kvm->slots_lock);
+
+ kvm->arch.vgic.io_dev = dev;
+
+ return 0;
+
+out_unlock:
+ mutex_unlock(&kvm->slots_lock);
+out_free_dev:
+ kfree(dev);
+ return ret;
+}
+
+static void vgic_unregister_kvm_io_dev(struct kvm *kvm)
+{
+ struct vgic_dist *dist = &kvm->arch.vgic;
+
+ if (dist) {
+ kvm_io_bus_unregister_dev(kvm, KVM_MMIO_BUS, dist->io_dev);
+ kfree(dist->io_dev);
+ dist->io_dev = NULL;
+ }
}
static int vgic_nr_shared_irqs(struct vgic_dist *dist)
@@ -1427,6 +1530,8 @@ void kvm_vgic_destroy(struct kvm *kvm)
struct kvm_vcpu *vcpu;
int i;
+ vgic_unregister_kvm_io_dev(kvm);
+
kvm_for_each_vcpu(i, vcpu, kvm)
kvm_vgic_vcpu_destroy(vcpu);
@@ -1548,6 +1653,8 @@ int kvm_vgic_init(struct kvm *kvm)
if (vgic_initialized(kvm))
goto out;
+ vgic_register_kvm_io_dev(kvm);
+
ret = vgic_init_maps(kvm);
if (ret) {
kvm_err("Unable to allocate maps\n");
* [PATCH v2 4/5] ARM/ARM64: enable linking against eventfd
2014-12-07 9:37 [PATCH v2 0/5] ARM: KVM: Enable the ioeventfd capability of KVM on ARM Nikolay Nikolaev
` (2 preceding siblings ...)
2014-12-07 9:37 ` [PATCH v2 3/5] KVM: ARM VGIC add kvm_io_bus_ frontend Nikolay Nikolaev
@ 2014-12-07 9:38 ` Nikolay Nikolaev
2014-12-07 9:38 ` [PATCH v2 5/5] ARM: enable KVM_CAP_IOEVENTFD Nikolay Nikolaev
2015-01-12 21:46 ` [PATCH v2 0/5] ARM: KVM: Enable the ioeventfd capability of KVM on ARM Eric Auger
5 siblings, 0 replies; 17+ messages in thread
From: Nikolay Nikolaev @ 2014-12-07 9:38 UTC (permalink / raw)
To: linux-arm-kernel
This enables compilation of the eventfd feature on ARM/ARM64.
Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
---
arch/arm/kvm/Kconfig | 1 +
arch/arm/kvm/Makefile | 2 +-
arch/arm64/kvm/Kconfig | 1 +
arch/arm64/kvm/Makefile | 2 +-
4 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/arm/kvm/Kconfig b/arch/arm/kvm/Kconfig
index 466bd29..a4b0312 100644
--- a/arch/arm/kvm/Kconfig
+++ b/arch/arm/kvm/Kconfig
@@ -20,6 +20,7 @@ config KVM
bool "Kernel-based Virtual Machine (KVM) support"
select PREEMPT_NOTIFIERS
select ANON_INODES
+ select HAVE_KVM_EVENTFD
select HAVE_KVM_CPU_RELAX_INTERCEPT
select KVM_MMIO
select KVM_ARM_HOST
diff --git a/arch/arm/kvm/Makefile b/arch/arm/kvm/Makefile
index 443b8be..539c1a5 100644
--- a/arch/arm/kvm/Makefile
+++ b/arch/arm/kvm/Makefile
@@ -15,7 +15,7 @@ AFLAGS_init.o := -Wa,-march=armv7-a$(plus_virt)
AFLAGS_interrupts.o := -Wa,-march=armv7-a$(plus_virt)
KVM := ../../../virt/kvm
-kvm-arm-y = $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o
+kvm-arm-y = $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o
obj-y += kvm-arm.o init.o interrupts.o
obj-y += arm.o handle_exit.o guest.o mmu.o emulate.o reset.o
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 8ba85e9..cb839d0 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -21,6 +21,7 @@ config KVM
select MMU_NOTIFIER
select PREEMPT_NOTIFIERS
select ANON_INODES
+ select HAVE_KVM_EVENTFD
select HAVE_KVM_CPU_RELAX_INTERCEPT
select KVM_MMIO
select KVM_ARM_HOST
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 4e6e09e..0dffb5f 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -11,7 +11,7 @@ ARM=../../../arch/arm/kvm
obj-$(CONFIG_KVM_ARM_HOST) += kvm.o
-kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o
+kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(ARM)/arm.o $(ARM)/mmu.o $(ARM)/mmio.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(ARM)/psci.o $(ARM)/perf.o
* [PATCH v2 5/5] ARM: enable KVM_CAP_IOEVENTFD
2014-12-07 9:37 [PATCH v2 0/5] ARM: KVM: Enable the ioeventfd capability of KVM on ARM Nikolay Nikolaev
` (3 preceding siblings ...)
2014-12-07 9:38 ` [PATCH v2 4/5] ARM/ARM64: enable linking against eventfd Nikolay Nikolaev
@ 2014-12-07 9:38 ` Nikolay Nikolaev
2015-01-12 21:46 ` [PATCH v2 0/5] ARM: KVM: Enable the ioeventfd capability of KVM on ARM Eric Auger
5 siblings, 0 replies; 17+ messages in thread
From: Nikolay Nikolaev @ 2014-12-07 9:38 UTC (permalink / raw)
To: linux-arm-kernel
KVM on ARM now supports the eventfd extension; advertise KVM_CAP_IOEVENTFD.
Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
---
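(Not part of the patch - a hypothetical userspace usage once the capability is
advertised; kvm_fd/vm_fd/doorbell_gpa are placeholders, and <linux/kvm.h>,
<sys/ioctl.h> and <sys/eventfd.h> are assumed to be included:)

	if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_IOEVENTFD) > 0) {
		struct kvm_ioeventfd ioev = {
			.addr	= doorbell_gpa,	/* guest-physical doorbell address */
			.len	= 4,
			.fd	= eventfd(0, 0),
			.flags	= 0,		/* no datamatch: any 4-byte write triggers */
		};
		ioctl(vm_fd, KVM_IOEVENTFD, &ioev);
		/* guest MMIO writes to doorbell_gpa now just signal the eventfd */
	}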
arch/arm/kvm/arm.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index c3d0fbd..266b618 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -197,6 +197,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_MAX_VCPUS:
r = KVM_MAX_VCPUS;
break;
+ case KVM_CAP_IOEVENTFD:
+ r = 1;
+ break;
default:
r = kvm_arch_dev_ioctl_check_extension(ext);
break;
* [PATCH v2 2/5] KVM: ARM: on IO mem abort - route the call to KVM MMIO bus
2014-12-07 9:37 ` [PATCH v2 2/5] KVM: ARM: on IO mem abort - route the call to KVM MMIO bus Nikolay Nikolaev
@ 2015-01-12 17:09 ` Eric Auger
2015-01-12 17:48 ` Eric Auger
2015-01-24 1:02 ` Nikolay Nikolaev
0 siblings, 2 replies; 17+ messages in thread
From: Eric Auger @ 2015-01-12 17:09 UTC (permalink / raw)
To: linux-arm-kernel
Hi Nikolay,
On 12/07/2014 10:37 AM, Nikolay Nikolaev wrote:
> On an IO memory abort, try to handle the MMIO access through the
> KVM-registered read/write callbacks. This is done by invoking the relevant
> kvm_io_bus_* API.
>
> Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
> ---
> arch/arm/kvm/mmio.c | 33 +++++++++++++++++++++++++++++++++
> 1 file changed, 33 insertions(+)
>
> diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
> index 4cb5a93..e42469f 100644
> --- a/arch/arm/kvm/mmio.c
> +++ b/arch/arm/kvm/mmio.c
> @@ -162,6 +162,36 @@ static int decode_hsr(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> return 0;
> }
>
> +/**
> + * handle_kernel_mmio - handle an in-kernel MMIO access
> + * @vcpu: pointer to the vcpu performing the access
> + * @run: pointer to the kvm_run structure
> + * @mmio: pointer to the data describing the access
> + *
> + * returns true if the MMIO access has been performed in kernel space,
> + * and false if it needs to be emulated in user space.
> + */
> +static bool handle_kernel_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
> + struct kvm_exit_mmio *mmio)
> +{
> + int ret;
> +
> + if (mmio->is_write) {
> + ret = kvm_io_bus_write(vcpu, KVM_MMIO_BUS, mmio->phys_addr,
> + mmio->len, &mmio->data);
> +
> + } else {
> + ret = kvm_io_bus_read(vcpu, KVM_MMIO_BUS, mmio->phys_addr,
> + mmio->len, &mmio->data);
> + }
> + if (!ret) {
> + kvm_prepare_mmio(run, mmio);
> + kvm_handle_mmio_return(vcpu, run);
> + }
> +
> + return !ret;
in case ret < 0 (-EOPNOTSUPP = -95) aren't we returning true too? return
(ret==0)?
> +}
> +
> int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
> phys_addr_t fault_ipa)
> {
> @@ -200,6 +230,9 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
> if (vgic_handle_mmio(vcpu, run, &mmio))
> return 1;
>
> + if (handle_kernel_mmio(vcpu, run, &mmio))
> + return 1;
> +
> kvm_prepare_mmio(run, &mmio);
> return 0;
currently the io_mem_abort return value is not used by the mmu.c code. I
think this should be handled in kvm_handle_guest_abort. What do you think?
Best Regards
Eric
> }
>
* [PATCH v2 1/5] KVM: Redesign kvm_io_bus_ API to pass VCPU structure to the callbacks.
2014-12-07 9:37 ` [PATCH v2 1/5] KVM: Redesign kvm_io_bus_ API to pass VCPU structure to the callbacks Nikolay Nikolaev
@ 2015-01-12 17:10 ` Eric Auger
0 siblings, 0 replies; 17+ messages in thread
From: Eric Auger @ 2015-01-12 17:10 UTC (permalink / raw)
To: linux-arm-kernel
Hi Nikolay,
On 12/07/2014 10:37 AM, Nikolay Nikolaev wrote:
> This is needed in e.g. ARM vGIC emulation, where the MMIO handling
> depends on the VCPU that does the access.
>
> Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
> ---
> arch/ia64/kvm/kvm-ia64.c | 4 ++--
> arch/powerpc/kvm/mpic.c | 10 ++++++----
> arch/powerpc/kvm/powerpc.c | 4 ++--
> arch/s390/kvm/diag.c | 2 +-
> arch/x86/kvm/i8254.c | 14 +++++++++-----
> arch/x86/kvm/i8259.c | 12 ++++++------
> arch/x86/kvm/lapic.c | 4 ++--
> arch/x86/kvm/vmx.c | 2 +-
> arch/x86/kvm/x86.c | 13 +++++++------
> include/linux/kvm_host.h | 10 +++++-----
> virt/kvm/coalesced_mmio.c | 5 +++--
> virt/kvm/eventfd.c | 4 ++--
> virt/kvm/ioapic.c | 8 ++++----
> virt/kvm/iodev.h | 23 +++++++++++++++--------
> virt/kvm/kvm_main.c | 32 ++++++++++++++++----------------
> 15 files changed, 81 insertions(+), 66 deletions(-)
>
> diff --git a/arch/ia64/kvm/kvm-ia64.c b/arch/ia64/kvm/kvm-ia64.c
> index dbe46f4..287df8d 100644
> --- a/arch/ia64/kvm/kvm-ia64.c
> +++ b/arch/ia64/kvm/kvm-ia64.c
> @@ -246,10 +246,10 @@ static int handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
> return 0;
> mmio:
> if (p->dir)
> - r = kvm_io_bus_read(vcpu->kvm, KVM_MMIO_BUS, p->addr,
> + r = kvm_io_bus_read(vcpu, KVM_MMIO_BUS, p->addr,
> p->size, &p->data);
> else
> - r = kvm_io_bus_write(vcpu->kvm, KVM_MMIO_BUS, p->addr,
> + r = kvm_io_bus_write(vcpu, KVM_MMIO_BUS, p->addr,
> p->size, &p->data);
> if (r)
> printk(KERN_ERR"kvm: No iodevice found! addr:%lx\n", p->addr);
> diff --git a/arch/powerpc/kvm/mpic.c b/arch/powerpc/kvm/mpic.c
> index 39b3a8f..8542f07 100644
> --- a/arch/powerpc/kvm/mpic.c
> +++ b/arch/powerpc/kvm/mpic.c
> @@ -1374,8 +1374,9 @@ static int kvm_mpic_write_internal(struct openpic *opp, gpa_t addr, u32 val)
> return -ENXIO;
> }
>
> -static int kvm_mpic_read(struct kvm_io_device *this, gpa_t addr,
> - int len, void *ptr)
> +static int kvm_mpic_read(struct kvm_vcpu *vcpu,
> + struct kvm_io_device *this,
> + gpa_t addr, int len, void *ptr)
> {
> struct openpic *opp = container_of(this, struct openpic, mmio);
> int ret;
> @@ -1415,8 +1416,9 @@ static int kvm_mpic_read(struct kvm_io_device *this, gpa_t addr,
> return ret;
> }
>
> -static int kvm_mpic_write(struct kvm_io_device *this, gpa_t addr,
> - int len, const void *ptr)
> +static int kvm_mpic_write(struct kvm_vcpu *vcpu,
> + struct kvm_io_device *this,
> + gpa_t addr, int len, const void *ptr)
> {
> struct openpic *opp = container_of(this, struct openpic, mmio);
> int ret;
> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
> index c1f8f53..5ac065b 100644
> --- a/arch/powerpc/kvm/powerpc.c
> +++ b/arch/powerpc/kvm/powerpc.c
> @@ -814,7 +814,7 @@ int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
>
> idx = srcu_read_lock(&vcpu->kvm->srcu);
>
> - ret = kvm_io_bus_read(vcpu->kvm, KVM_MMIO_BUS, run->mmio.phys_addr,
> + ret = kvm_io_bus_read(vcpu, KVM_MMIO_BUS, run->mmio.phys_addr,
> bytes, &run->mmio.data);
>
> srcu_read_unlock(&vcpu->kvm->srcu, idx);
> @@ -887,7 +887,7 @@ int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
>
> idx = srcu_read_lock(&vcpu->kvm->srcu);
>
> - ret = kvm_io_bus_write(vcpu->kvm, KVM_MMIO_BUS, run->mmio.phys_addr,
> + ret = kvm_io_bus_write(vcpu, KVM_MMIO_BUS, run->mmio.phys_addr,
> bytes, &run->mmio.data);
>
> srcu_read_unlock(&vcpu->kvm->srcu, idx);
> diff --git a/arch/s390/kvm/diag.c b/arch/s390/kvm/diag.c
> index 9254aff..329ec75 100644
> --- a/arch/s390/kvm/diag.c
> +++ b/arch/s390/kvm/diag.c
> @@ -213,7 +213,7 @@ static int __diag_virtio_hypercall(struct kvm_vcpu *vcpu)
> * - gpr 3 contains the virtqueue index (passed as datamatch)
> * - gpr 4 contains the index on the bus (optionally)
> */
> - ret = kvm_io_bus_write_cookie(vcpu->kvm, KVM_VIRTIO_CCW_NOTIFY_BUS,
> + ret = kvm_io_bus_write_cookie(vcpu, KVM_VIRTIO_CCW_NOTIFY_BUS,
> vcpu->run->s.regs.gprs[2] & 0xffffffff,
> 8, &vcpu->run->s.regs.gprs[3],
> vcpu->run->s.regs.gprs[4]);
> diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c
> index 298781d..4dce6f8 100644
> --- a/arch/x86/kvm/i8254.c
> +++ b/arch/x86/kvm/i8254.c
> @@ -443,7 +443,8 @@ static inline int pit_in_range(gpa_t addr)
> (addr < KVM_PIT_BASE_ADDRESS + KVM_PIT_MEM_LENGTH));
> }
>
> -static int pit_ioport_write(struct kvm_io_device *this,
> +static int pit_ioport_write(struct kvm_vcpu *vcpu,
> + struct kvm_io_device *this,
> gpa_t addr, int len, const void *data)
> {
> struct kvm_pit *pit = dev_to_pit(this);
> @@ -519,7 +520,8 @@ static int pit_ioport_write(struct kvm_io_device *this,
> return 0;
> }
>
> -static int pit_ioport_read(struct kvm_io_device *this,
> +static int pit_ioport_read(struct kvm_vcpu *vcpu,
> + struct kvm_io_device *this,
> gpa_t addr, int len, void *data)
> {
> struct kvm_pit *pit = dev_to_pit(this);
> @@ -589,7 +591,8 @@ static int pit_ioport_read(struct kvm_io_device *this,
> return 0;
> }
>
> -static int speaker_ioport_write(struct kvm_io_device *this,
> +static int speaker_ioport_write(struct kvm_vcpu *vcpu,
> + struct kvm_io_device *this,
> gpa_t addr, int len, const void *data)
> {
> struct kvm_pit *pit = speaker_to_pit(this);
> @@ -606,8 +609,9 @@ static int speaker_ioport_write(struct kvm_io_device *this,
> return 0;
> }
>
> -static int speaker_ioport_read(struct kvm_io_device *this,
> - gpa_t addr, int len, void *data)
> +static int speaker_ioport_read(struct kvm_vcpu *vcpu,
> + struct kvm_io_device *this,
> + gpa_t addr, int len, void *data)
> {
> struct kvm_pit *pit = speaker_to_pit(this);
> struct kvm_kpit_state *pit_state = &pit->pit_state;
> diff --git a/arch/x86/kvm/i8259.c b/arch/x86/kvm/i8259.c
> index cc31f7c..8ff4eaa 100644
> --- a/arch/x86/kvm/i8259.c
> +++ b/arch/x86/kvm/i8259.c
> @@ -528,42 +528,42 @@ static int picdev_read(struct kvm_pic *s,
> return 0;
> }
>
> -static int picdev_master_write(struct kvm_io_device *dev,
> +static int picdev_master_write(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
> gpa_t addr, int len, const void *val)
> {
> return picdev_write(container_of(dev, struct kvm_pic, dev_master),
> addr, len, val);
> }
>
> -static int picdev_master_read(struct kvm_io_device *dev,
> +static int picdev_master_read(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
> gpa_t addr, int len, void *val)
> {
> return picdev_read(container_of(dev, struct kvm_pic, dev_master),
> addr, len, val);
> }
>
> -static int picdev_slave_write(struct kvm_io_device *dev,
> +static int picdev_slave_write(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
> gpa_t addr, int len, const void *val)
> {
> return picdev_write(container_of(dev, struct kvm_pic, dev_slave),
> addr, len, val);
> }
>
> -static int picdev_slave_read(struct kvm_io_device *dev,
> +static int picdev_slave_read(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
> gpa_t addr, int len, void *val)
> {
> return picdev_read(container_of(dev, struct kvm_pic, dev_slave),
> addr, len, val);
> }
>
> -static int picdev_eclr_write(struct kvm_io_device *dev,
> +static int picdev_eclr_write(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
> gpa_t addr, int len, const void *val)
> {
> return picdev_write(container_of(dev, struct kvm_pic, dev_eclr),
> addr, len, val);
> }
>
> -static int picdev_eclr_read(struct kvm_io_device *dev,
> +static int picdev_eclr_read(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
> gpa_t addr, int len, void *val)
> {
> return picdev_read(container_of(dev, struct kvm_pic, dev_eclr),
> diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
> index b8345dd..c071c36 100644
> --- a/arch/x86/kvm/lapic.c
> +++ b/arch/x86/kvm/lapic.c
> @@ -1007,7 +1007,7 @@ static int apic_mmio_in_range(struct kvm_lapic *apic, gpa_t addr)
> addr < apic->base_address + LAPIC_MMIO_LENGTH;
> }
>
> -static int apic_mmio_read(struct kvm_io_device *this,
> +static int apic_mmio_read(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
> gpa_t address, int len, void *data)
> {
> struct kvm_lapic *apic = to_lapic(this);
> @@ -1253,7 +1253,7 @@ static int apic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
> return ret;
> }
>
> -static int apic_mmio_write(struct kvm_io_device *this,
> +static int apic_mmio_write(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
> gpa_t address, int len, const void *data)
> {
> struct kvm_lapic *apic = to_lapic(this);
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 3e556c6..e6d9f01 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -5620,7 +5620,7 @@ static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
> gpa_t gpa;
>
> gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
> - if (!kvm_io_bus_write(vcpu->kvm, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
> + if (!kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
> skip_emulated_instruction(vcpu);
> return 1;
> }
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 0033df3..e0462db 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -4052,8 +4052,8 @@ static int vcpu_mmio_write(struct kvm_vcpu *vcpu, gpa_t addr, int len,
> do {
> n = min(len, 8);
> if (!(vcpu->arch.apic &&
> - !kvm_iodevice_write(&vcpu->arch.apic->dev, addr, n, v))
> - && kvm_io_bus_write(vcpu->kvm, KVM_MMIO_BUS, addr, n, v))
> + !kvm_iodevice_write(vcpu, &vcpu->arch.apic->dev, addr, n, v))
> + && kvm_io_bus_write(vcpu, KVM_MMIO_BUS, addr, n, v))
> break;
> handled += n;
> addr += n;
> @@ -4072,8 +4072,9 @@ static int vcpu_mmio_read(struct kvm_vcpu *vcpu, gpa_t addr, int len, void *v)
> do {
> n = min(len, 8);
> if (!(vcpu->arch.apic &&
> - !kvm_iodevice_read(&vcpu->arch.apic->dev, addr, n, v))
> - && kvm_io_bus_read(vcpu->kvm, KVM_MMIO_BUS, addr, n, v))
> + !kvm_iodevice_read(vcpu, &vcpu->arch.apic->dev,
> + addr, n, v))
> + && kvm_io_bus_read(vcpu, KVM_MMIO_BUS, addr, n, v))
> break;
> trace_kvm_mmio(KVM_TRACE_MMIO_READ, n, addr, *(u64 *)v);
> handled += n;
> @@ -4565,10 +4566,10 @@ static int kernel_pio(struct kvm_vcpu *vcpu, void *pd)
> int r;
>
> if (vcpu->arch.pio.in)
> - r = kvm_io_bus_read(vcpu->kvm, KVM_PIO_BUS, vcpu->arch.pio.port,
> + r = kvm_io_bus_read(vcpu, KVM_PIO_BUS, vcpu->arch.pio.port,
> vcpu->arch.pio.size, pd);
> else
> - r = kvm_io_bus_write(vcpu->kvm, KVM_PIO_BUS,
> + r = kvm_io_bus_write(vcpu, KVM_PIO_BUS,
> vcpu->arch.pio.port, vcpu->arch.pio.size,
> pd);
> return r;
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index b1aa88d..f0d1ce7 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -168,12 +168,12 @@ enum kvm_bus {
> KVM_NR_BUSES
> };
>
> -int kvm_io_bus_write(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
> +int kvm_io_bus_write(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx, gpa_t addr,
> int len, const void *val);
> -int kvm_io_bus_write_cookie(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
> - int len, const void *val, long cookie);
> -int kvm_io_bus_read(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr, int len,
> - void *val);
> +int kvm_io_bus_write_cookie(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx,
> + gpa_t addr, int len, const void *val, long cookie);
> +int kvm_io_bus_read(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx, gpa_t addr,
> + int len, void *val);
> int kvm_io_bus_register_dev(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
> int len, struct kvm_io_device *dev);
> int kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
> diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
> index 00d8642..c831a40 100644
> --- a/virt/kvm/coalesced_mmio.c
> +++ b/virt/kvm/coalesced_mmio.c
> @@ -60,8 +60,9 @@ static int coalesced_mmio_has_room(struct kvm_coalesced_mmio_dev *dev)
> return 1;
> }
>
> -static int coalesced_mmio_write(struct kvm_io_device *this,
> - gpa_t addr, int len, const void *val)
> +static int coalesced_mmio_write(struct kvm_vcpu *vcpu,
> + struct kvm_io_device *this, gpa_t addr,
> + int len, const void *val)
> {
> struct kvm_coalesced_mmio_dev *dev = to_mmio(this);
> struct kvm_coalesced_mmio_ring *ring = dev->kvm->coalesced_mmio_ring;
> diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
> index b0fb390..7202039 100644
> --- a/virt/kvm/eventfd.c
> +++ b/virt/kvm/eventfd.c
> @@ -719,8 +719,8 @@ ioeventfd_in_range(struct _ioeventfd *p, gpa_t addr, int len, const void *val)
>
> /* MMIO/PIO writes trigger an event if the addr/val match */
> static int
> -ioeventfd_write(struct kvm_io_device *this, gpa_t addr, int len,
> - const void *val)
> +ioeventfd_write(struct kvm_vcpu *vcpu, struct kvm_io_device *this, gpa_t addr,
> + int len, const void *val)
> {
> struct _ioeventfd *p = to_ioeventfd(this);
>
> diff --git a/virt/kvm/ioapic.c b/virt/kvm/ioapic.c
> index 0ba4057..a032909 100644
> --- a/virt/kvm/ioapic.c
> +++ b/virt/kvm/ioapic.c
> @@ -505,8 +505,8 @@ static inline int ioapic_in_range(struct kvm_ioapic *ioapic, gpa_t addr)
> (addr < ioapic->base_address + IOAPIC_MEM_LENGTH)));
> }
>
> -static int ioapic_mmio_read(struct kvm_io_device *this, gpa_t addr, int len,
> - void *val)
> +static int ioapic_mmio_read(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
> + gpa_t addr, int len, void *val)
> {
> struct kvm_ioapic *ioapic = to_ioapic(this);
> u32 result;
> @@ -548,8 +548,8 @@ static int ioapic_mmio_read(struct kvm_io_device *this, gpa_t addr, int len,
> return 0;
> }
>
> -static int ioapic_mmio_write(struct kvm_io_device *this, gpa_t addr, int len,
> - const void *val)
> +static int ioapic_mmio_write(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
> + gpa_t addr, int len, const void *val)
> {
> struct kvm_ioapic *ioapic = to_ioapic(this);
> u32 data;
> diff --git a/virt/kvm/iodev.h b/virt/kvm/iodev.h
> index 12fd3ca..7dd88cb 100644
> --- a/virt/kvm/iodev.h
> +++ b/virt/kvm/iodev.h
> @@ -20,6 +20,7 @@
> #include <asm/errno.h>
>
> struct kvm_io_device;
> +struct kvm_vcpu;
>
> /**
> * kvm_io_device_ops are called under kvm slots_lock.
> @@ -27,11 +28,13 @@ struct kvm_io_device;
> * or non-zero to have it passed to the next device.
> **/
> struct kvm_io_device_ops {
> - int (*read)(struct kvm_io_device *this,
> + int (*read)(struct kvm_vcpu *vcpu,
> + struct kvm_io_device *this,
strange alignment
> gpa_t addr,
> int len,
> void *val);
> - int (*write)(struct kvm_io_device *this,
> + int (*write)(struct kvm_vcpu *vcpu,
> + struct kvm_io_device *this,
same
Besides this minor thing, looks ok to me.
Best Regards
Eric
> gpa_t addr,
> int len,
> const void *val);
> @@ -49,16 +52,20 @@ static inline void kvm_iodevice_init(struct kvm_io_device *dev,
> dev->ops = ops;
> }
>
> -static inline int kvm_iodevice_read(struct kvm_io_device *dev,
> - gpa_t addr, int l, void *v)
> +static inline int kvm_iodevice_read(struct kvm_vcpu *vcpu,
> + struct kvm_io_device *dev, gpa_t addr,
> + int l, void *v)
> {
> - return dev->ops->read ? dev->ops->read(dev, addr, l, v) : -EOPNOTSUPP;
> + return dev->ops->read ? dev->ops->read(vcpu, dev, addr, l, v)
> + : -EOPNOTSUPP;
> }
>
> -static inline int kvm_iodevice_write(struct kvm_io_device *dev,
> - gpa_t addr, int l, const void *v)
> +static inline int kvm_iodevice_write(struct kvm_vcpu *vcpu,
> + struct kvm_io_device *dev, gpa_t addr,
> + int l, const void *v)
> {
> - return dev->ops->write ? dev->ops->write(dev, addr, l, v) : -EOPNOTSUPP;
> + return dev->ops->write ? dev->ops->write(vcpu, dev, addr, l, v)
> + : -EOPNOTSUPP;
> }
>
> static inline void kvm_iodevice_destructor(struct kvm_io_device *dev)
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 3cee7b1..16e5152 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -2924,7 +2924,7 @@ static int kvm_io_bus_get_first_dev(struct kvm_io_bus *bus,
> return off;
> }
>
> -static int __kvm_io_bus_write(struct kvm_io_bus *bus,
> +static int __kvm_io_bus_write(struct kvm_vcpu *vcpu, struct kvm_io_bus *bus,
> struct kvm_io_range *range, const void *val)
> {
> int idx;
> @@ -2935,7 +2935,7 @@ static int __kvm_io_bus_write(struct kvm_io_bus *bus,
>
> while (idx < bus->dev_count &&
> kvm_io_bus_cmp(range, &bus->range[idx]) == 0) {
> - if (!kvm_iodevice_write(bus->range[idx].dev, range->addr,
> + if (!kvm_iodevice_write(vcpu, bus->range[idx].dev, range->addr,
> range->len, val))
> return idx;
> idx++;
> @@ -2945,7 +2945,7 @@ static int __kvm_io_bus_write(struct kvm_io_bus *bus,
> }
>
> /* kvm_io_bus_write - called under kvm->slots_lock */
> -int kvm_io_bus_write(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
> +int kvm_io_bus_write(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx, gpa_t addr,
> int len, const void *val)
> {
> struct kvm_io_bus *bus;
> @@ -2957,14 +2957,14 @@ int kvm_io_bus_write(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
> .len = len,
> };
>
> - bus = srcu_dereference(kvm->buses[bus_idx], &kvm->srcu);
> - r = __kvm_io_bus_write(bus, &range, val);
> + bus = srcu_dereference(vcpu->kvm->buses[bus_idx], &kvm->srcu);
> + r = __kvm_io_bus_write(vcpu, bus, &range, val);
> return r < 0 ? r : 0;
> }
>
> /* kvm_io_bus_write_cookie - called under kvm->slots_lock */
> -int kvm_io_bus_write_cookie(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
> - int len, const void *val, long cookie)
> +int kvm_io_bus_write_cookie(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx,
> + gpa_t addr, int len, const void *val, long cookie)
> {
> struct kvm_io_bus *bus;
> struct kvm_io_range range;
> @@ -2974,12 +2974,12 @@ int kvm_io_bus_write_cookie(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
> .len = len,
> };
>
> - bus = srcu_dereference(kvm->buses[bus_idx], &kvm->srcu);
> + bus = srcu_dereference(vcpu->kvm->buses[bus_idx], &kvm->srcu);
>
> /* First try the device referenced by cookie. */
> if ((cookie >= 0) && (cookie < bus->dev_count) &&
> (kvm_io_bus_cmp(&range, &bus->range[cookie]) == 0))
> - if (!kvm_iodevice_write(bus->range[cookie].dev, addr, len,
> + if (!kvm_iodevice_write(vcpu, bus->range[cookie].dev, addr, len,
> val))
> return cookie;
>
> @@ -2987,11 +2987,11 @@ int kvm_io_bus_write_cookie(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
> * cookie contained garbage; fall back to search and return the
> * correct cookie value.
> */
> - return __kvm_io_bus_write(bus, &range, val);
> + return __kvm_io_bus_write(vcpu, bus, &range, val);
> }
>
> -static int __kvm_io_bus_read(struct kvm_io_bus *bus, struct kvm_io_range *range,
> - void *val)
> +static int __kvm_io_bus_read(struct kvm_vcpu *vcpu, struct kvm_io_bus *bus,
> + struct kvm_io_range *range, void *val)
> {
> int idx;
>
> @@ -3001,7 +3001,7 @@ static int __kvm_io_bus_read(struct kvm_io_bus *bus, struct kvm_io_range *range,
>
> while (idx < bus->dev_count &&
> kvm_io_bus_cmp(range, &bus->range[idx]) == 0) {
> - if (!kvm_iodevice_read(bus->range[idx].dev, range->addr,
> + if (!kvm_iodevice_read(vcpu, bus->range[idx].dev, range->addr,
> range->len, val))
> return idx;
> idx++;
> @@ -3012,7 +3012,7 @@ static int __kvm_io_bus_read(struct kvm_io_bus *bus, struct kvm_io_range *range,
> EXPORT_SYMBOL_GPL(kvm_io_bus_write);
>
> /* kvm_io_bus_read - called under kvm->slots_lock */
> -int kvm_io_bus_read(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
> +int kvm_io_bus_read(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx, gpa_t addr,
> int len, void *val)
> {
> struct kvm_io_bus *bus;
> @@ -3024,8 +3024,8 @@ int kvm_io_bus_read(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
> .len = len,
> };
>
> - bus = srcu_dereference(kvm->buses[bus_idx], &kvm->srcu);
> - r = __kvm_io_bus_read(bus, &range, val);
> + bus = srcu_dereference(vcpu->kvm->buses[bus_idx], &kvm->srcu);
> + r = __kvm_io_bus_read(vcpu, bus, &range, val);
> return r < 0 ? r : 0;
> }
>
>
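For readers skimming only this hunk: the device-side callback shape the series
moves to (a sketch inferred from the hunks above and from the vgic frontend in
patch 3/5; the actual iodev.h hunk is not quoted here) is roughly:

	/* Sketch, not the exact iodev.h change: every kvm_io_device callback
	 * now receives the VCPU that issued the access as its first argument. */
	struct kvm_io_device_ops {
		int (*read)(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
			    gpa_t addr, int len, void *val);
		int (*write)(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
			     gpa_t addr, int len, const void *val);
		void (*destructor)(struct kvm_io_device *this);
	};

Existing in-kernel devices only need their read/write callbacks re-wired to the
new prototype; registration via kvm_io_bus_register_dev() is unchanged.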
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v2 2/5] KVM: ARM: on IO mem abort - route the call to KVM MMIO bus
2015-01-12 17:09 ` Eric Auger
@ 2015-01-12 17:48 ` Eric Auger
2015-01-24 1:02 ` Nikolay Nikolaev
1 sibling, 0 replies; 17+ messages in thread
From: Eric Auger @ 2015-01-12 17:48 UTC (permalink / raw)
To: linux-arm-kernel
On 01/12/2015 06:09 PM, Eric Auger wrote:
> Hi Nikolay,
> On 12/07/2014 10:37 AM, Nikolay Nikolaev wrote:
>> On IO memory abort, try to handle the MMIO access through the KVM
>> registered read/write callbacks. This is done by invoking the relevant
>> kvm_io_bus_* API.
>>
>> Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
>> ---
>> arch/arm/kvm/mmio.c | 33 +++++++++++++++++++++++++++++++++
>> 1 file changed, 33 insertions(+)
>>
>> diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
>> index 4cb5a93..e42469f 100644
>> --- a/arch/arm/kvm/mmio.c
>> +++ b/arch/arm/kvm/mmio.c
>> @@ -162,6 +162,36 @@ static int decode_hsr(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>> return 0;
>> }
>>
>> +/**
>> + * handle_kernel_mmio - handle an in-kernel MMIO access
>> + * @vcpu: pointer to the vcpu performing the access
>> + * @run: pointer to the kvm_run structure
>> + * @mmio: pointer to the data describing the access
>> + *
>> + * returns true if the MMIO access has been performed in kernel space,
>> + * and false if it needs to be emulated in user space.
>> + */
>> +static bool handle_kernel_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
>> + struct kvm_exit_mmio *mmio)
>> +{
>> + int ret;
>> +
>> + if (mmio->is_write) {
>> + ret = kvm_io_bus_write(vcpu, KVM_MMIO_BUS, mmio->phys_addr,
>> + mmio->len, &mmio->data);
>> +
>> + } else {
>> + ret = kvm_io_bus_read(vcpu, KVM_MMIO_BUS, mmio->phys_addr,
>> + mmio->len, &mmio->data);
>> + }
>> + if (!ret) {
>> + kvm_prepare_mmio(run, mmio);
>> + kvm_handle_mmio_return(vcpu, run);
>> + }
>> +
>> + return !ret;
> in case ret < 0 (-EOPNOTSUPP = -95) aren't we returning true too? return
> (ret==0)?
Please forget that comment ;-)
Eric
>
>> +}
>> +
>> int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>> phys_addr_t fault_ipa)
>> {
>> @@ -200,6 +230,9 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>> if (vgic_handle_mmio(vcpu, run, &mmio))
>> return 1;
>>
>> + if (handle_kernel_mmio(vcpu, run, &mmio))
>> + return 1;
>> +
>> kvm_prepare_mmio(run, &mmio);
>> return 0;
> currently the io_mem_abort returned value is not used by mmu.c code. I
> think this should be handled in kvm_handle_guest_abort. What do you think?
>
> Best Regards
>
> Eric
>> }
>>
>
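As an aside, for anyone wondering what sits on the receiving end of the
kvm_io_bus_write() call quoted above: an ioeventfd is registered on the same
KVM_MMIO_BUS as an ordinary kvm_io_device, so no further glue is needed.
Roughly (paraphrasing virt/kvm/eventfd.c, with the vcpu parameter added by
patch 1/5; not an exact quote):

	/* Simplified: the ioeventfd's write callback signals the eventfd when
	 * address, length and (optional) datamatch agree, and otherwise bows
	 * out so other devices on the bus can claim the access. */
	static int ioeventfd_write(struct kvm_vcpu *vcpu,
				   struct kvm_io_device *this,
				   gpa_t addr, int len, const void *val)
	{
		struct _ioeventfd *p = to_ioeventfd(this);

		if (!ioeventfd_in_range(p, addr, len, val))
			return -EOPNOTSUPP;

		eventfd_signal(p->eventfd, 1);
		return 0;
	}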
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v2 3/5] KVM: ARM VGIC add kvm_io_bus_ frontend
2014-12-07 9:37 ` [PATCH v2 3/5] KVM: ARM VGIC add kvm_io_bus_ frontend Nikolay Nikolaev
@ 2015-01-12 21:41 ` Eric Auger
2015-01-24 0:57 ` Nikolay Nikolaev
0 siblings, 1 reply; 17+ messages in thread
From: Eric Auger @ 2015-01-12 21:41 UTC (permalink / raw)
To: linux-arm-kernel
On 12/07/2014 10:37 AM, Nikolay Nikolaev wrote:
> In io_mem_abort remove the call to vgic_handle_mmio. The target is to have
> a single MMIO handling path - that is through the kvm_io_bus_ API.
>
> Register a kvm_io_device in kvm_vgic_init on the whole vGIC MMIO region.
> Both read and write calls are redirected to vgic_io_dev_access, where a
> kvm_exit_mmio is composed and passed to vm_ops.handle_mmio.
>
>
> Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
> ---
> arch/arm/kvm/mmio.c | 3 -
> include/kvm/arm_vgic.h | 3 -
> virt/kvm/arm/vgic.c | 127 ++++++++++++++++++++++++++++++++++++++++++++----
> 3 files changed, 118 insertions(+), 15 deletions(-)
>
> diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
> index e42469f..bf466c8 100644
> --- a/arch/arm/kvm/mmio.c
> +++ b/arch/arm/kvm/mmio.c
> @@ -227,9 +227,6 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
> if (mmio.is_write)
> mmio_write_buf(mmio.data, mmio.len, data);
>
> - if (vgic_handle_mmio(vcpu, run, &mmio))
> - return 1;
> -
> if (handle_kernel_mmio(vcpu, run, &mmio))
> return 1;
>
> diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
> index e452ef7..d9b7d2a 100644
> --- a/include/kvm/arm_vgic.h
> +++ b/include/kvm/arm_vgic.h
> @@ -233,6 +233,7 @@ struct vgic_dist {
> unsigned long *irq_pending_on_cpu;
>
> struct vgic_vm_ops vm_ops;
> + struct kvm_io_device *io_dev;
> #endif
> };
>
> @@ -307,8 +308,6 @@ int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int irq_num,
> bool level);
> void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg);
> int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu);
> -bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
> - struct kvm_exit_mmio *mmio);
>
> #define irqchip_in_kernel(k) (!!((k)->arch.vgic.in_kernel))
> #define vgic_initialized(k) ((k)->arch.vgic.ready)
> diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
> index bd74207..1c7cbec 100644
> --- a/virt/kvm/arm/vgic.c
> +++ b/virt/kvm/arm/vgic.c
> @@ -31,6 +31,9 @@
> #include <asm/kvm_emulate.h>
> #include <asm/kvm_arm.h>
> #include <asm/kvm_mmu.h>
> +#include <asm/kvm.h>
> +
> +#include "iodev.h"
>
> /*
> * How the whole thing works (courtesy of Christoffer Dall):
> @@ -776,27 +779,127 @@ bool vgic_handle_mmio_range(struct kvm_vcpu *vcpu, struct kvm_run *run,
> }
>
> /**
> - * vgic_handle_mmio - handle an in-kernel MMIO access for the GIC emulation
> + * vgic_io_dev_access - handle an in-kernel MMIO access for the GIC emulation
> * @vcpu: pointer to the vcpu performing the access
> - * @run: pointer to the kvm_run structure
> - * @mmio: pointer to the data describing the access
> + * @this: pointer to the kvm_io_device structure
> + * @addr: the MMIO address being accessed
> + * @len: the length of the accessed data
> + * @val: pointer to the value being written,
> + * or where the read operation will store its result
> + * @is_write: flag to show whether a write access is performed
> *
> - * returns true if the MMIO access has been performed in kernel space,
> - * and false if it needs to be emulated in user space.
> + * returns 0 if the MMIO access has been performed in kernel space,
> + * and 1 if it needs to be emulated in user space.
> * Calls the actual handling routine for the selected VGIC model.
> */
> -bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
> - struct kvm_exit_mmio *mmio)
> +static int vgic_io_dev_access(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
> + gpa_t addr, int len, void *val, bool is_write)
> {
> - if (!irqchip_in_kernel(vcpu->kvm))
> - return false;
> + struct kvm_exit_mmio mmio;
> + bool ret;
> +
> + mmio = (struct kvm_exit_mmio) {
> + .phys_addr = addr,
> + .len = len,
> + .is_write = is_write,
> + };
> +
> + if (is_write)
> + memcpy(mmio.data, val, len);
>
> /*
> * This will currently call either vgic_v2_handle_mmio() or
> * vgic_v3_handle_mmio(), which in turn will call
> * vgic_handle_mmio_range() defined above.
> */
> - return vcpu->kvm->arch.vgic.vm_ops.handle_mmio(vcpu, run, mmio);
> + ret = vcpu->kvm->arch.vgic.vm_ops.handle_mmio(vcpu, vcpu->run, &mmio);
> +
> + if (!is_write)
> + memcpy(val, mmio.data, len);
> +
> + return ret ? 0 : 1;
> +}
> +
> +static int vgic_io_dev_read(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
> + gpa_t addr, int len, void *val)
> +{
> + return vgic_io_dev_access(vcpu, this, addr, len, val, false);
> +}
> +
> +static int vgic_io_dev_write(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
> + gpa_t addr, int len, const void *val)
> +{
> + return vgic_io_dev_access(vcpu, this, addr, len, (void *)val, true);
> +}
> +
> +static const struct kvm_io_device_ops vgic_io_dev_ops = {
> + .read = vgic_io_dev_read,
> + .write = vgic_io_dev_write,
> +};
> +
> +static int vgic_register_kvm_io_dev(struct kvm *kvm)
> +{
> + int len, ret;
> +
> + struct vgic_dist *dist = &kvm->arch.vgic;
> + unsigned long base = dist->vgic_dist_base;
> + u32 type = kvm->arch.vgic.vgic_model;
> + struct kvm_io_device *dev;
> +
> + if (IS_VGIC_ADDR_UNDEF(base)) {
> + kvm_err("Need to set vgic distributor address first\n");
> + return -ENXIO;
> + }
> +
> + dev = kzalloc(sizeof(struct kvm_io_device), GFP_KERNEL);
> + if (!dev)
> + return -ENOMEM;
what was the outcome of the dynamic/static allocation discussion?
> +
> + switch (type) {
> + case KVM_DEV_TYPE_ARM_VGIC_V2:
> + len = KVM_VGIC_V2_DIST_SIZE;
> + break;
> +#ifdef CONFIG_ARM_GIC_V3
> + case KVM_DEV_TYPE_ARM_VGIC_V3:
> + len = KVM_VGIC_V3_DIST_SIZE;
> + break;
> +#endif
> + default:
> + kvm_err("Unsupported VGIC model\n");
> + goto out_free_dev;
> + break;
may be removed
> + }
> +
> + kvm_iodevice_init(dev, &vgic_io_dev_ops);
> +
> + mutex_lock(&kvm->slots_lock);
> +
> + ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS,
> + base, len, dev);
> + if (ret < 0)
> + goto out_unlock;
> + mutex_unlock(&kvm->slots_lock);
> +
> + kvm->arch.vgic.io_dev = dev;
> +
> + return 0;
> +
> +out_unlock:
> + mutex_unlock(&kvm->slots_lock);
> +out_free_dev:
> + kfree(dev);
> + return ret;
> +}
> +
> +static void vgic_unregister_kvm_io_dev(struct kvm *kvm)
> +{
> + struct vgic_dist *dist = &kvm->arch.vgic;
> +
> + if (dist) {
> + kvm_io_bus_unregister_dev(kvm, KVM_MMIO_BUS, dist->io_dev);
> + kfree(dist->io_dev);
> + dist->io_dev = NULL;
could be put in a destructor function but not sure it is worth the candle.
> + }
> }
>
> static int vgic_nr_shared_irqs(struct vgic_dist *dist)
> @@ -1427,6 +1530,8 @@ void kvm_vgic_destroy(struct kvm *kvm)
> struct kvm_vcpu *vcpu;
> int i;
>
> + vgic_unregister_kvm_io_dev(kvm);
> +
> kvm_for_each_vcpu(i, vcpu, kvm)
> kvm_vgic_vcpu_destroy(vcpu);
>
> @@ -1548,6 +1653,8 @@ int kvm_vgic_init(struct kvm *kvm)
> if (vgic_initialized(kvm))
> goto out;
>
> + vgic_register_kvm_io_dev(kvm);
> +
should happen in kvm_vgic_map_resources now after rebase on
Christoffer's series.
Best Regards
Eric
> ret = vgic_init_maps(kvm);
> if (ret) {
> kvm_err("Unable to allocate maps\n");
>
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v2 0/5] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
2014-12-07 9:37 [PATCH v2 0/5] ARM: KVM: Enable the ioeventfd capability of KVM on ARM Nikolay Nikolaev
` (4 preceding siblings ...)
2014-12-07 9:38 ` [PATCH v2 5/5] ARM: enable KVM_CAP_IOEVENTFD Nikolay Nikolaev
@ 2015-01-12 21:46 ` Eric Auger
[not found] ` <CADDJ2=M2UjsV0U9cFiRoKghWSckWje+h6M-XQ-0dqPrH3BXp1A@mail.gmail.com>
5 siblings, 1 reply; 17+ messages in thread
From: Eric Auger @ 2015-01-12 21:46 UTC (permalink / raw)
To: linux-arm-kernel
Hi Nikolay,
looks good to me overall. A rebase on Christoffer's vgic init series and
Andre's v6 series will help in reviewing & testing.
Best Regards
Eric
On 12/07/2014 10:37 AM, Nikolay Nikolaev wrote:
> The IOEVENTFD KVM capability is a prerequisite for vhost support.
>
> This series enables the ioeventfd KVM capability on ARM.
>
> The implementation routes MMIO access in the IO abort handler to the KVM IO bus.
> If there is already a registered ioeventfd handler for this address, the file
> descriptor will be triggered.
>
> We extended the KVM IO bus API to expose the VCPU struct pointer. Now the VGIC
> MMIO access is done through this API. For this to operate the VGIC registers a
> kvm_io_device which represents the whole dist MMIO region.
>
> The patches are implemented on top of the latest Andre's vGICv3 work from here:
> http://www.linux-arm.org/git?p=linux-ap.git;a=shortlog;h=refs/heads/kvm-gicv3/v4
>
> The code was tested on Dual Cortex-A15 Exynos5250 (ARM Chromebook).
> ARM64 build was verified, but not run on actual HW.
>
> Changes since v1:
> - fixed x86 compilation
> - GICv2/GICv3 dist base selection
> - added vgic_unregister_kvm_io_dev to free the iodev resources
> - enable eventfd on ARM64
>
> ---
>
> Nikolay Nikolaev (5):
> KVM: Redesign kvm_io_bus_ API to pass VCPU structure to the callbacks.
> KVM: ARM: on IO mem abort - route the call to KVM MMIO bus
> KVM: ARM VGIC add kvm_io_bus_ frontend
> ARM/ARM64: enable linking against eventfd
> ARM: enable KVM_CAP_IOEVENTFD
>
>
> arch/arm/kvm/Kconfig | 1
> arch/arm/kvm/Makefile | 2 -
> arch/arm/kvm/arm.c | 3 +
> arch/arm/kvm/mmio.c | 32 +++++++++++
> arch/arm64/kvm/Kconfig | 1
> arch/arm64/kvm/Makefile | 2 -
> arch/ia64/kvm/kvm-ia64.c | 4 +
> arch/powerpc/kvm/mpic.c | 10 ++-
> arch/powerpc/kvm/powerpc.c | 4 +
> arch/s390/kvm/diag.c | 2 -
> arch/x86/kvm/i8254.c | 14 +++--
> arch/x86/kvm/i8259.c | 12 ++--
> arch/x86/kvm/lapic.c | 4 +
> arch/x86/kvm/vmx.c | 2 -
> arch/x86/kvm/x86.c | 13 ++---
> include/kvm/arm_vgic.h | 3 -
> include/linux/kvm_host.h | 10 ++-
> virt/kvm/arm/vgic.c | 127 +++++++++++++++++++++++++++++++++++++++++---
> virt/kvm/coalesced_mmio.c | 5 +-
> virt/kvm/eventfd.c | 4 +
> virt/kvm/ioapic.c | 8 +--
> virt/kvm/iodev.h | 23 +++++---
> virt/kvm/kvm_main.c | 32 ++++++-----
> 23 files changed, 237 insertions(+), 81 deletions(-)
>
> --
> Signature
>
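To make the capability being enabled here concrete, below is a minimal,
self-contained userspace sketch of what a VMM does with it once
KVM_CAP_IOEVENTFD is reported (standard KVM and eventfd ioctls; the doorbell
address and datamatch value are made-up placeholders):

	/* Ask KVM to signal an eventfd on 4-byte guest writes of the value 1
	 * to a placeholder MMIO doorbell address, instead of exiting to
	 * userspace for every write. */
	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/eventfd.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	int main(void)
	{
		int kvm = open("/dev/kvm", O_RDWR);
		int vm, efd;
		struct kvm_ioeventfd ioev;

		if (kvm < 0 || ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_IOEVENTFD) <= 0) {
			fprintf(stderr, "KVM_CAP_IOEVENTFD not available\n");
			return 1;
		}

		vm  = ioctl(kvm, KVM_CREATE_VM, 0);
		efd = eventfd(0, 0);

		memset(&ioev, 0, sizeof(ioev));
		ioev.addr      = 0x0a003e00;	/* placeholder doorbell GPA */
		ioev.len       = 4;
		ioev.datamatch = 1;
		ioev.fd        = efd;
		ioev.flags     = KVM_IOEVENTFD_FLAG_DATAMATCH;

		if (ioctl(vm, KVM_IOEVENTFD, &ioev) < 0) {
			perror("KVM_IOEVENTFD");
			return 1;
		}

		/* A vhost backend (or any poller) can now wait on efd and see
		 * guest doorbell writes without a userspace exit. */
		return 0;
	}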
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v2 0/5] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
[not found] ` <CADDJ2=M2UjsV0U9cFiRoKghWSckWje+h6M-XQ-0dqPrH3BXp1A@mail.gmail.com>
@ 2015-01-15 15:31 ` Eric Auger
2015-01-15 19:47 ` Christoffer Dall
0 siblings, 1 reply; 17+ messages in thread
From: Eric Auger @ 2015-01-15 15:31 UTC (permalink / raw)
To: linux-arm-kernel
On 01/15/2015 04:25 PM, Nikolay Nikolaev wrote:
>
>
> On Mon, Jan 12, 2015 at 11:46 PM, Eric Auger <eric.auger@linaro.org> wrote:
>
> Hi Nikolay,
>
> looks good to me overall. A rebase on Christoffer's vgic init series
>
> is there a tree where these patches are published?
Hi Nikolay,
Yes Andre's kvm-gicv3/v7 branch on https://github.com/apritzel/linux.git
seems to contain all those.
Best Regards
Eric
>
> regards,
> Nikolay Nikolaev
>
> and
> Andre's v6 series will help in reviewing & testing.
>
> Best Regards
>
> Eric
>
> On 12/07/2014 10:37 AM, Nikolay Nikolaev wrote:
> > The IOEVENTFD KVM capability is a prerequisite for vhost support.
> >
> > This series enables the ioeventfd KVM capability on ARM.
> >
> > The implementation routes MMIO access in the IO abort handler to
> the KVM IO bus.
> > If there is already a registered ioeventfd handler for this
> address, the file
> > descriptor will be triggered.
> >
> > We extended the KVM IO bus API to expose the VCPU struct pointer.
> Now the VGIC
> > MMIO access is done through this API. For this to operate the VGIC
> registers a
> > kvm_io_device which represents the whole dist MMIO region.
> >
> > The patches are implemented on top of the latest Andre's vGICv3
> work from here:
> >
> http://www.linux-arm.org/git?p=linux-ap.git;a=shortlog;h=refs/heads/kvm-gicv3/v4
> >
> > The code was tested on Dual Cortex-A15 Exynos5250 (ARM Chromebook).
> > ARM64 build was verified, but not run on actual HW.
> >
> > Changes since v1:
> > - fixed x86 compilation
> > - GICv2/GICv3 dist base selection
> > - added vgic_unregister_kvm_io_dev to free the iodev resources
> > - enable eventfd on ARM64
> >
> > ---
> >
> > Nikolay Nikolaev (5):
> > KVM: Redesign kvm_io_bus_ API to pass VCPU structure to the
> callbacks.
> > KVM: ARM: on IO mem abort - route the call to KVM MMIO bus
> > KVM: ARM VGIC add kvm_io_bus_ frontend
> > ARM/ARM64: enable linking against eventfd
> > ARM: enable KVM_CAP_IOEVENTFD
> >
> >
> > arch/arm/kvm/Kconfig | 1
> > arch/arm/kvm/Makefile | 2 -
> > arch/arm/kvm/arm.c | 3 +
> > arch/arm/kvm/mmio.c | 32 +++++++++++
> > arch/arm64/kvm/Kconfig | 1
> > arch/arm64/kvm/Makefile | 2 -
> > arch/ia64/kvm/kvm-ia64.c | 4 +
> > arch/powerpc/kvm/mpic.c | 10 ++-
> > arch/powerpc/kvm/powerpc.c | 4 +
> > arch/s390/kvm/diag.c | 2 -
> > arch/x86/kvm/i8254.c | 14 +++--
> > arch/x86/kvm/i8259.c | 12 ++--
> > arch/x86/kvm/lapic.c | 4 +
> > arch/x86/kvm/vmx.c | 2 -
> > arch/x86/kvm/x86.c | 13 ++---
> > include/kvm/arm_vgic.h | 3 -
> > include/linux/kvm_host.h | 10 ++-
> > virt/kvm/arm/vgic.c | 127
> +++++++++++++++++++++++++++++++++++++++++---
> > virt/kvm/coalesced_mmio.c | 5 +-
> > virt/kvm/eventfd.c | 4 +
> > virt/kvm/ioapic.c | 8 +--
> > virt/kvm/iodev.h | 23 +++++---
> > virt/kvm/kvm_main.c | 32 ++++++-----
> > 23 files changed, 237 insertions(+), 81 deletions(-)
> >
> > --
> > Signature
> >
>
>
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v2 0/5] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
2015-01-15 15:31 ` Eric Auger
@ 2015-01-15 19:47 ` Christoffer Dall
0 siblings, 0 replies; 17+ messages in thread
From: Christoffer Dall @ 2015-01-15 19:47 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Jan 15, 2015 at 04:31:55PM +0100, Eric Auger wrote:
> On 01/15/2015 04:25 PM, Nikolay Nikolaev wrote:
> >
> >
> > On Mon, Jan 12, 2015 at 11:46 PM, Eric Auger <eric.auger@linaro.org> wrote:
> >
> > Hi Nikolay,
> >
> > looks good to me overall. A rebase on Christoffer's vgic init series
> >
> > is there a tree where these patches are published?
> Hi Nikolay,
>
> Yes Andre's kvm-gicv3/v7 branch on https://github.com/apritzel/linux.git
> seems to contain all those.
>
It would probably be good to wait until these are in kvmarm/next along with the
dirty page logging stuff and rebase on that.
-Christoffer
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v2 3/5] KVM: ARM VGIC add kvm_io_bus_ frontend
2015-01-12 21:41 ` Eric Auger
@ 2015-01-24 0:57 ` Nikolay Nikolaev
0 siblings, 0 replies; 17+ messages in thread
From: Nikolay Nikolaev @ 2015-01-24 0:57 UTC (permalink / raw)
To: linux-arm-kernel
On Mon, Jan 12, 2015 at 11:41 PM, Eric Auger <eric.auger@linaro.org> wrote:
> On 12/07/2014 10:37 AM, Nikolay Nikolaev wrote:
>> In io_mem_abort remove the call to vgic_handle_mmio. The target is to have
>> a single MMIO handling path - that is through the kvm_io_bus_ API.
>>
>> Register a kvm_io_device in kvm_vgic_init on the whole vGIC MMIO region.
>> Both read and write calls are redirected to vgic_io_dev_access, where a
>> kvm_exit_mmio is composed and passed to vm_ops.handle_mmio.
>>
>>
>> Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
>> ---
>> arch/arm/kvm/mmio.c | 3 -
>> include/kvm/arm_vgic.h | 3 -
>> virt/kvm/arm/vgic.c | 127 ++++++++++++++++++++++++++++++++++++++++++++----
>> 3 files changed, 118 insertions(+), 15 deletions(-)
>>
>> diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
>> index e42469f..bf466c8 100644
>> --- a/arch/arm/kvm/mmio.c
>> +++ b/arch/arm/kvm/mmio.c
>> @@ -227,9 +227,6 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>> if (mmio.is_write)
>> mmio_write_buf(mmio.data, mmio.len, data);
>>
>> - if (vgic_handle_mmio(vcpu, run, &mmio))
>> - return 1;
>> -
>> if (handle_kernel_mmio(vcpu, run, &mmio))
>> return 1;
>>
>> diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
>> index e452ef7..d9b7d2a 100644
>> --- a/include/kvm/arm_vgic.h
>> +++ b/include/kvm/arm_vgic.h
>> @@ -233,6 +233,7 @@ struct vgic_dist {
>> unsigned long *irq_pending_on_cpu;
>>
>> struct vgic_vm_ops vm_ops;
>> + struct kvm_io_device *io_dev;
>> #endif
>> };
>>
>> @@ -307,8 +308,6 @@ int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int irq_num,
>> bool level);
>> void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg);
>> int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu);
>> -bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
>> - struct kvm_exit_mmio *mmio);
>>
>> #define irqchip_in_kernel(k) (!!((k)->arch.vgic.in_kernel))
>> #define vgic_initialized(k) ((k)->arch.vgic.ready)
>> diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
>> index bd74207..1c7cbec 100644
>> --- a/virt/kvm/arm/vgic.c
>> +++ b/virt/kvm/arm/vgic.c
>> @@ -31,6 +31,9 @@
>> #include <asm/kvm_emulate.h>
>> #include <asm/kvm_arm.h>
>> #include <asm/kvm_mmu.h>
>> +#include <asm/kvm.h>
>> +
>> +#include "iodev.h"
>>
>> /*
>> * How the whole thing works (courtesy of Christoffer Dall):
>> @@ -776,27 +779,127 @@ bool vgic_handle_mmio_range(struct kvm_vcpu *vcpu, struct kvm_run *run,
>> }
>>
>> /**
>> - * vgic_handle_mmio - handle an in-kernel MMIO access for the GIC emulation
>> + * vgic_io_dev_access - handle an in-kernel MMIO access for the GIC emulation
>> * @vcpu: pointer to the vcpu performing the access
>> - * @run: pointer to the kvm_run structure
>> - * @mmio: pointer to the data describing the access
>> + * @this: pointer to the kvm_io_device structure
>> + * @addr: the MMIO address being accessed
>> + * @len: the length of the accessed data
>> + * @val: pointer to the value being written,
>> + * or where the read operation will store its result
>> + * @is_write: flag to show whether a write access is performed
>> *
>> - * returns true if the MMIO access has been performed in kernel space,
>> - * and false if it needs to be emulated in user space.
>> + * returns 0 if the MMIO access has been performed in kernel space,
>> + * and 1 if it needs to be emulated in user space.
>> * Calls the actual handling routine for the selected VGIC model.
>> */
>> -bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
>> - struct kvm_exit_mmio *mmio)
>> +static int vgic_io_dev_access(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
>> + gpa_t addr, int len, void *val, bool is_write)
>> {
>> - if (!irqchip_in_kernel(vcpu->kvm))
>> - return false;
>> + struct kvm_exit_mmio mmio;
>> + bool ret;
>> +
>> + mmio = (struct kvm_exit_mmio) {
>> + .phys_addr = addr,
>> + .len = len,
>> + .is_write = is_write,
>> + };
>> +
>> + if (is_write)
>> + memcpy(mmio.data, val, len);
>>
>> /*
>> * This will currently call either vgic_v2_handle_mmio() or
>> * vgic_v3_handle_mmio(), which in turn will call
>> * vgic_handle_mmio_range() defined above.
>> */
>> - return vcpu->kvm->arch.vgic.vm_ops.handle_mmio(vcpu, run, mmio);
>> + ret = vcpu->kvm->arch.vgic.vm_ops.handle_mmio(vcpu, vcpu->run, &mmio);
>> +
>> + if (!is_write)
>> + memcpy(val, mmio.data, len);
>> +
>> + return ret ? 0 : 1;
>> +}
>> +
>> +static int vgic_io_dev_read(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
>> + gpa_t addr, int len, void *val)
>> +{
>> + return vgic_io_dev_access(vcpu, this, addr, len, val, false);
>> +}
>> +
>> +static int vgic_io_dev_write(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
>> + gpa_t addr, int len, const void *val)
>> +{
>> + return vgic_io_dev_access(vcpu, this, addr, len, (void *)val, true);
>> +}
>> +
>> +static const struct kvm_io_device_ops vgic_io_dev_ops = {
>> + .read = vgic_io_dev_read,
>> + .write = vgic_io_dev_write,
>> +};
>> +
>> +static int vgic_register_kvm_io_dev(struct kvm *kvm)
>> +{
>> + int len, ret;
>> +
>> + struct vgic_dist *dist = &kvm->arch.vgic;
>> + unsigned long base = dist->vgic_dist_base;
>> + u32 type = kvm->arch.vgic.vgic_model;
>> + struct kvm_io_device *dev;
>> +
>> + if (IS_VGIC_ADDR_UNDEF(base)) {
>> + kvm_err("Need to set vgic distributor address first\n");
>> + return -ENXIO;
>> + }
>> +
>> + dev = kzalloc(sizeof(struct kvm_io_device), GFP_KERNEL);
>> + if (!dev)
>> + return -ENOMEM;
> what was the outcome of the dynamic/static allocation discussion?
To have a static member I have to include virt/kvm/iodev.h from include/kvm/arm_vgic.h,
and then I get this error at the earliest stage of kernel compilation:
CC arch/arm64/kernel/asm-offsets.s
In file included from ./arch/arm64/include/asm/kvm_host.h:41:0,
from include/linux/kvm_host.h:34,
from arch/arm64/kernel/asm-offsets.c:24:
include/kvm/arm_vgic.h:28:19: fatal error: iodev.h: No such file or directory
This one is invoked from the toplevel Kbuild script when trying to
generate the include/generated/asm-offsets.h
I didn't manage to find an obvious way to add "-I virt/kvm" for this
file's compilation. arch/arm64/kernel/Makefile is not used at this
stage.
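For the record, roughly what the static variant would look like, and where the
include problem comes from (a sketch, not the exact code that was tried):

	/* Sketch only: embedding the device in vgic_dist removes the
	 * kzalloc/kfree pair, but include/kvm/arm_vgic.h then needs the full
	 * definition of struct kvm_io_device, which lives in the private
	 * header virt/kvm/iodev.h and is not on the include path here. */
	#include "iodev.h"		/* <-- this include is the problem */

	struct vgic_dist {
		/* ... existing fields ... */
		struct kvm_io_device	io_dev;	/* embedded instead of a pointer */
	};

One way out would be to move iodev.h somewhere public (e.g. include/kvm/) so
that arm_vgic.h can reach it, but that is a separate cleanup.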
>> +
>> + switch (type) {
>> + case KVM_DEV_TYPE_ARM_VGIC_V2:
>> + len = KVM_VGIC_V2_DIST_SIZE;
>> + break;
>> +#ifdef CONFIG_ARM_GIC_V3
>> + case KVM_DEV_TYPE_ARM_VGIC_V3:
>> + len = KVM_VGIC_V3_DIST_SIZE;
>> + break;
>> +#endif
>> + default:
>> + kvm_err("Unsupported VGIC model\n");
>> + goto out_free_dev;
>> + break;
> may be removed
>> + }
>> +
>> + kvm_iodevice_init(dev, &vgic_io_dev_ops);
>> +
>> + mutex_lock(&kvm->slots_lock);
>> +
>> + ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS,
>> + base, len, dev);
>> + if (ret < 0)
>> + goto out_unlock;
>> + mutex_unlock(&kvm->slots_lock);
>> +
>> + kvm->arch.vgic.io_dev = dev;
>> +
>> + return 0;
>> +
>> +out_unlock:
>> + mutex_unlock(&kvm->slots_lock);
>> +out_free_dev:
>> + kfree(dev);
>> + return ret;
>> +}
>> +
>> +static void vgic_unregister_kvm_io_dev(struct kvm *kvm)
>> +{
>> + struct vgic_dist *dist = &kvm->arch.vgic;
>> +
>> + if (dist) {
>> + kvm_io_bus_unregister_dev(kvm, KVM_MMIO_BUS, dist->io_dev);
>> + kfree(dist->io_dev);
>> + dist->io_dev = NULL;
> could be put in a destructor function but not sure it is worth the candle.
>> + }
>> }
>>
>> static int vgic_nr_shared_irqs(struct vgic_dist *dist)
>> @@ -1427,6 +1530,8 @@ void kvm_vgic_destroy(struct kvm *kvm)
>> struct kvm_vcpu *vcpu;
>> int i;
>>
>> + vgic_unregister_kvm_io_dev(kvm);
>> +
>> kvm_for_each_vcpu(i, vcpu, kvm)
>> kvm_vgic_vcpu_destroy(vcpu);
>>
>> @@ -1548,6 +1653,8 @@ int kvm_vgic_init(struct kvm *kvm)
>> if (vgic_initialized(kvm))
>> goto out;
>>
>> + vgic_register_kvm_io_dev(kvm);
>> +
> should happen in kvm_vgic_map_resources now after rebase on
> Christoffer's series.
>
> Best Regards
>
> Eric
>> ret = vgic_init_maps(kvm);
>> if (ret) {
>> kvm_err("Unable to allocate maps\n");
>>
>
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v2 2/5] KVM: ARM: on IO mem abort - route the call to KVM MMIO bus
2015-01-12 17:09 ` Eric Auger
2015-01-12 17:48 ` Eric Auger
@ 2015-01-24 1:02 ` Nikolay Nikolaev
2015-01-27 21:38 ` Christoffer Dall
1 sibling, 1 reply; 17+ messages in thread
From: Nikolay Nikolaev @ 2015-01-24 1:02 UTC (permalink / raw)
To: linux-arm-kernel
On Mon, Jan 12, 2015 at 7:09 PM, Eric Auger <eric.auger@linaro.org> wrote:
> Hi Nikolay,
> On 12/07/2014 10:37 AM, Nikolay Nikolaev wrote:
> >> On IO memory abort, try to handle the MMIO access through the KVM
>> registered read/write callbacks. This is done by invoking the relevant
>> kvm_io_bus_* API.
>>
>> Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
>> ---
>> arch/arm/kvm/mmio.c | 33 +++++++++++++++++++++++++++++++++
>> 1 file changed, 33 insertions(+)
>>
>> diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
>> index 4cb5a93..e42469f 100644
>> --- a/arch/arm/kvm/mmio.c
>> +++ b/arch/arm/kvm/mmio.c
>> @@ -162,6 +162,36 @@ static int decode_hsr(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>> return 0;
>> }
>>
>> +/**
>> + * handle_kernel_mmio - handle an in-kernel MMIO access
>> + * @vcpu: pointer to the vcpu performing the access
>> + * @run: pointer to the kvm_run structure
>> + * @mmio: pointer to the data describing the access
>> + *
>> + * returns true if the MMIO access has been performed in kernel space,
>> + * and false if it needs to be emulated in user space.
>> + */
>> +static bool handle_kernel_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
>> + struct kvm_exit_mmio *mmio)
>> +{
>> + int ret;
>> +
>> + if (mmio->is_write) {
>> + ret = kvm_io_bus_write(vcpu, KVM_MMIO_BUS, mmio->phys_addr,
>> + mmio->len, &mmio->data);
>> +
>> + } else {
>> + ret = kvm_io_bus_read(vcpu, KVM_MMIO_BUS, mmio->phys_addr,
>> + mmio->len, &mmio->data);
>> + }
>> + if (!ret) {
>> + kvm_prepare_mmio(run, mmio);
>> + kvm_handle_mmio_return(vcpu, run);
>> + }
>> +
>> + return !ret;
> in case ret < 0 (-EOPNOTSUPP = -95) aren't we returning true too? return
> (ret==0)?
>
>> +}
>> +
>> int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>> phys_addr_t fault_ipa)
>> {
>> @@ -200,6 +230,9 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>> if (vgic_handle_mmio(vcpu, run, &mmio))
>> return 1;
>>
>> + if (handle_kernel_mmio(vcpu, run, &mmio))
>> + return 1;
>> +
>> kvm_prepare_mmio(run, &mmio);
>> return 0;
> currently the io_mem_abort returned value is not used by mmu.c code. I
> > think this should be handled in kvm_handle_guest_abort. What do you think?
You're right that the returned value is not handled further after we
exit io_mem_abort; it's just passed up the call stack.
However I'm not sure how to handle it better. If you have ideas, please share.
regards,
Nikolay Nikolaev
>
> Best Regards
>
> Eric
>> }
>>
>
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v2 2/5] KVM: ARM: on IO mem abort - route the call to KVM MMIO bus
2015-01-24 1:02 ` Nikolay Nikolaev
@ 2015-01-27 21:38 ` Christoffer Dall
2015-01-28 11:08 ` Eric Auger
0 siblings, 1 reply; 17+ messages in thread
From: Christoffer Dall @ 2015-01-27 21:38 UTC (permalink / raw)
To: linux-arm-kernel
On Sat, Jan 24, 2015 at 03:02:33AM +0200, Nikolay Nikolaev wrote:
> On Mon, Jan 12, 2015 at 7:09 PM, Eric Auger <eric.auger@linaro.org> wrote:
> > Hi Nikolay,
> > On 12/07/2014 10:37 AM, Nikolay Nikolaev wrote:
> >> On IO memory abort, try to handle the MMIO access through the KVM
> >> registered read/write callbacks. This is done by invoking the relevant
> >> kvm_io_bus_* API.
> >>
> >> Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
> >> ---
> >> arch/arm/kvm/mmio.c | 33 +++++++++++++++++++++++++++++++++
> >> 1 file changed, 33 insertions(+)
> >>
> >> diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
> >> index 4cb5a93..e42469f 100644
> >> --- a/arch/arm/kvm/mmio.c
> >> +++ b/arch/arm/kvm/mmio.c
> >> @@ -162,6 +162,36 @@ static int decode_hsr(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> >> return 0;
> >> }
> >>
> >> +/**
> >> + * handle_kernel_mmio - handle an in-kernel MMIO access
> >> + * @vcpu: pointer to the vcpu performing the access
> >> + * @run: pointer to the kvm_run structure
> >> + * @mmio: pointer to the data describing the access
> >> + *
> >> + * returns true if the MMIO access has been performed in kernel space,
> >> + * and false if it needs to be emulated in user space.
> >> + */
> >> +static bool handle_kernel_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
> >> + struct kvm_exit_mmio *mmio)
> >> +{
> >> + int ret;
> >> +
> >> + if (mmio->is_write) {
> >> + ret = kvm_io_bus_write(vcpu, KVM_MMIO_BUS, mmio->phys_addr,
> >> + mmio->len, &mmio->data);
> >> +
> >> + } else {
> >> + ret = kvm_io_bus_read(vcpu, KVM_MMIO_BUS, mmio->phys_addr,
> >> + mmio->len, &mmio->data);
> >> + }
> >> + if (!ret) {
> >> + kvm_prepare_mmio(run, mmio);
> >> + kvm_handle_mmio_return(vcpu, run);
> >> + }
> >> +
> >> + return !ret;
> > in case ret < 0 (-EOPNOTSUPP = -95) aren't we returning true too? return
> > (ret==0)?
> >
> >> +}
> >> +
> >> int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
> >> phys_addr_t fault_ipa)
> >> {
> >> @@ -200,6 +230,9 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
> >> if (vgic_handle_mmio(vcpu, run, &mmio))
> >> return 1;
> >>
> >> + if (handle_kernel_mmio(vcpu, run, &mmio))
> >> + return 1;
> >> +
> >> kvm_prepare_mmio(run, &mmio);
> >> return 0;
> > currently the io_mem_abort returned value is not used by mmu.c code. I
> > > think this should be handled in kvm_handle_guest_abort. What do you think?
>
> You're right that the returned value is not handled further after we
> exit io_mem_abort; it's just passed up the call stack.
> However I'm not sure how to handle it better. If you have ideas, please share.
>
I'm confused: the return value from io_mem_abort is assigned to a
variable 'ret' in kvm_handle_guest_abort and that determines if we
should run the VM again or return to userspace (with some work for
userspace to do or with an error).
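For reference, the call site in question, heavily trimmed (paraphrased from
arch/arm/kvm/mmu.c of that time; not an exact quote):

	/* kvm_handle_guest_abort(), trimmed: a data abort that hits no memslot
	 * is forwarded to io_mem_abort(), and its return value propagates out
	 * of the exit handler: 0 means "exit to userspace with an MMIO
	 * request", > 0 means "handled in the kernel, re-enter the guest". */
	if (kvm_is_error_hva(hva) || (write_fault && !writable)) {
		/* ... prefetch-abort handling elided ... */
		fault_ipa |= kvm_vcpu_get_hfar(vcpu) & ((1 << 12) - 1);
		ret = io_mem_abort(vcpu, run, fault_ipa);
		goto out_unlock;
	}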
-Christoffer
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v2 2/5] KVM: ARM: on IO mem abort - route the call to KVM MMIO bus
2015-01-27 21:38 ` Christoffer Dall
@ 2015-01-28 11:08 ` Eric Auger
0 siblings, 0 replies; 17+ messages in thread
From: Eric Auger @ 2015-01-28 11:08 UTC (permalink / raw)
To: linux-arm-kernel
On 01/27/2015 10:38 PM, Christoffer Dall wrote:
> On Sat, Jan 24, 2015 at 03:02:33AM +0200, Nikolay Nikolaev wrote:
>> On Mon, Jan 12, 2015 at 7:09 PM, Eric Auger <eric.auger@linaro.org> wrote:
>>> Hi Nikolay,
>>> On 12/07/2014 10:37 AM, Nikolay Nikolaev wrote:
>>>> On IO memory abort, try to handle the MMIO access through the KVM
>>>> registered read/write callbacks. This is done by invoking the relevant
>>>> kvm_io_bus_* API.
>>>>
>>>> Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
>>>> ---
>>>> arch/arm/kvm/mmio.c | 33 +++++++++++++++++++++++++++++++++
>>>> 1 file changed, 33 insertions(+)
>>>>
>>>> diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
>>>> index 4cb5a93..e42469f 100644
>>>> --- a/arch/arm/kvm/mmio.c
>>>> +++ b/arch/arm/kvm/mmio.c
>>>> @@ -162,6 +162,36 @@ static int decode_hsr(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>> return 0;
>>>> }
>>>>
>>>> +/**
>>>> + * handle_kernel_mmio - handle an in-kernel MMIO access
>>>> + * @vcpu: pointer to the vcpu performing the access
>>>> + * @run: pointer to the kvm_run structure
>>>> + * @mmio: pointer to the data describing the access
>>>> + *
>>>> + * returns true if the MMIO access has been performed in kernel space,
>>>> + * and false if it needs to be emulated in user space.
>>>> + */
>>>> +static bool handle_kernel_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
>>>> + struct kvm_exit_mmio *mmio)
>>>> +{
>>>> + int ret;
>>>> +
>>>> + if (mmio->is_write) {
>>>> + ret = kvm_io_bus_write(vcpu, KVM_MMIO_BUS, mmio->phys_addr,
>>>> + mmio->len, &mmio->data);
>>>> +
>>>> + } else {
>>>> + ret = kvm_io_bus_read(vcpu, KVM_MMIO_BUS, mmio->phys_addr,
>>>> + mmio->len, &mmio->data);
>>>> + }
>>>> + if (!ret) {
>>>> + kvm_prepare_mmio(run, mmio);
>>>> + kvm_handle_mmio_return(vcpu, run);
>>>> + }
>>>> +
>>>> + return !ret;
>>> in case ret < 0 (-EOPNOTSUPP = -95) aren't we returning true too? return
>>> (ret==0)?
>>>
>>>> +}
>>>> +
>>>> int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>>>> phys_addr_t fault_ipa)
>>>> {
>>>> @@ -200,6 +230,9 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>>>> if (vgic_handle_mmio(vcpu, run, &mmio))
>>>> return 1;
>>>>
>>>> + if (handle_kernel_mmio(vcpu, run, &mmio))
>>>> + return 1;
>>>> +
>>>> kvm_prepare_mmio(run, &mmio);
>>>> return 0;
>>> currently the io_mem_abort returned value is not used by mmu.c code. I
>>> think this should be handled in kvm_handle_guest_abort. What do you think?
>>
>> You're right that the returned value is not handled further after we
>> exit io_mem_abort; it's just passed up the call stack.
>> However I'm not sure how to handle it better. If you have ideas, please share.
>>
> I'm confused: the return value from io_mem_abort is assigned to a
> variable 'ret' in kvm_handle_guest_abort and that determines if we
> should run the VM again or return to userspace (with some work for
> userspace to do or with an error).
hum well apologies for that. Guess I was tired when reading that piece
of code :-(
Best Regards
Eric
>
> -Christoffer
>
^ permalink raw reply [flat|nested] 17+ messages in thread
end of thread, other threads:[~2015-01-28 11:08 UTC | newest]
Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-12-07 9:37 [PATCH v2 0/5] ARM: KVM: Enable the ioeventfd capability of KVM on ARM Nikolay Nikolaev
2014-12-07 9:37 ` [PATCH v2 1/5] KVM: Redesign kvm_io_bus_ API to pass VCPU structure to the callbacks Nikolay Nikolaev
2015-01-12 17:10 ` Eric Auger
2014-12-07 9:37 ` [PATCH v2 2/5] KVM: ARM: on IO mem abort - route the call to KVM MMIO bus Nikolay Nikolaev
2015-01-12 17:09 ` Eric Auger
2015-01-12 17:48 ` Eric Auger
2015-01-24 1:02 ` Nikolay Nikolaev
2015-01-27 21:38 ` Christoffer Dall
2015-01-28 11:08 ` Eric Auger
2014-12-07 9:37 ` [PATCH v2 3/5] KVM: ARM VGIC add kvm_io_bus_ frontend Nikolay Nikolaev
2015-01-12 21:41 ` Eric Auger
2015-01-24 0:57 ` Nikolay Nikolaev
2014-12-07 9:38 ` [PATCH v2 4/5] ARM/ARM64: enable linking against eventfd Nikolay Nikolaev
2014-12-07 9:38 ` [PATCH v2 5/5] ARM: enable KVM_CAP_IOEVENTFD Nikolay Nikolaev
2015-01-12 21:46 ` [PATCH v2 0/5] ARM: KVM: Enable the ioeventfd capability of KVM on ARM Eric Auger
[not found] ` <CADDJ2=M2UjsV0U9cFiRoKghWSckWje+h6M-XQ-0dqPrH3BXp1A@mail.gmail.com>
2015-01-15 15:31 ` Eric Auger
2015-01-15 19:47 ` Christoffer Dall