* [PATCH v3 00/13] KVM/ARM vGIC support
@ 2012-10-22 6:51 Christoffer Dall
2012-10-22 6:51 ` [PATCH v3 01/13] KVM: ARM: Introduce KVM_SET_DEVICE_ADDRESS ioctl Christoffer Dall
` (12 more replies)
0 siblings, 13 replies; 17+ messages in thread
From: Christoffer Dall @ 2012-10-22 6:51 UTC (permalink / raw)
To: linux-arm-kernel
The following series implements support for the virtual generic
interrupt controller architecture for KVM/ARM.
Changes since v2:
- Get rid of hardcoded guest cpu and distributor physical addresses
and instead provide the address through the KVM_SET_DEVICE_ADDRESS
ioctl.
- Fix level/edge bugs
- Fix reboot bug: retire queued, disabled interrupts
This patch series can also be pulled from:
git://github.com/virtualopensystems/linux-kvm-arm.git
branch: kvm-arm-v13-vgic
---
Christoffer Dall (2):
KVM: ARM: Introduce KVM_SET_DEVICE_ADDRESS ioctl
ARM: KVM: VGIC accept vcpu and dist base addresses from user space
Marc Zyngier (11):
ARM: KVM: Keep track of currently running vcpus
ARM: KVM: Initial VGIC infrastructure support
ARM: KVM: Initial VGIC MMIO support code
ARM: KVM: VGIC distributor handling
ARM: KVM: VGIC virtual CPU interface management
ARM: KVM: vgic: retire queued, disabled interrupts
ARM: KVM: VGIC interrupt injection
ARM: KVM: VGIC control interface world switch
ARM: KVM: VGIC initialisation code
ARM: KVM: vgic: reduce the number of vcpu kick
ARM: KVM: Add VGIC configuration option
Documentation/virtual/kvm/api.txt | 37 +
arch/arm/include/asm/kvm_arm.h | 12
arch/arm/include/asm/kvm_host.h | 17 +
arch/arm/include/asm/kvm_mmu.h | 2
arch/arm/include/asm/kvm_vgic.h | 320 +++++++++
arch/arm/include/uapi/asm/kvm.h | 13
arch/arm/kernel/asm-offsets.c | 12
arch/arm/kvm/Kconfig | 7
arch/arm/kvm/Makefile | 1
arch/arm/kvm/arm.c | 138 ++++
arch/arm/kvm/interrupts.S | 4
arch/arm/kvm/interrupts_head.S | 68 ++
arch/arm/kvm/mmio.c | 3
arch/arm/kvm/vgic.c | 1251 +++++++++++++++++++++++++++++++++++++
include/uapi/linux/kvm.h | 8
virt/kvm/kvm_main.c | 5
16 files changed, 1893 insertions(+), 5 deletions(-)
create mode 100644 arch/arm/include/asm/kvm_vgic.h
create mode 100644 arch/arm/kvm/vgic.c
--
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v3 01/13] KVM: ARM: Introduce KVM_SET_DEVICE_ADDRESS ioctl
2012-10-22 6:51 [PATCH v3 00/13] KVM/ARM vGIC support Christoffer Dall
@ 2012-10-22 6:51 ` Christoffer Dall
2012-11-09 13:45 ` [kvmarm] " Peter Maydell
2012-10-22 6:51 ` [PATCH v3 02/13] ARM: KVM: Keep track of currently running vcpus Christoffer Dall
` (11 subsequent siblings)
12 siblings, 1 reply; 17+ messages in thread
From: Christoffer Dall @ 2012-10-22 6:51 UTC (permalink / raw)
To: linux-arm-kernel
On ARM (and possibly other architectures) some bits are specific to the
model being emulated for the guest and user space needs a way to tell
the kernel about those bits. An example is mmio device base addresses,
where KVM must know the base address for a given device to properly
emulate mmio accesses within a certain address range or directly map a
device with virtualization extensions into the guest address space.
We try to make this API slightly more generic than our specific use
requires, but so far only the VGIC uses this feature.
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
---
Documentation/virtual/kvm/api.txt | 37 +++++++++++++++++++++++++++++++++++++
arch/arm/include/uapi/asm/kvm.h | 13 +++++++++++++
arch/arm/kvm/arm.c | 24 +++++++++++++++++++++++-
include/uapi/linux/kvm.h | 8 ++++++++
4 files changed, 81 insertions(+), 1 deletion(-)
diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 764c5df..428b625 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2191,6 +2191,43 @@ This ioctl returns the guest registers that are supported for the
KVM_GET_ONE_REG/KVM_SET_ONE_REG calls.
+4.80 KVM_SET_DEVICE_ADDRESS
+
+Capability: KVM_CAP_SET_DEVICE_ADDRESS
+Architectures: arm
+Type: vm ioctl
+Parameters: struct kvm_device_address (in)
+Returns: 0 on success, -1 on error
+Errors:
+ ENODEV: The device id is unknown
+ ENXIO: Device not supported on current system
+ EEXIST: Address already set
+ E2BIG: Address outside guest physical address space
+
+struct kvm_device_address {
+ __u32 id;
+ __u64 addr;
+};
+
+Specify a device address in the guest's physical address space where guests
+can access emulated or directly exposed devices, which the host kernel needs
+to know about. The id field is an architecture specific identifier for a
+specific device.
+
+ARM divides the id field into two parts, a device id and an address type id
+specific to the individual device.
+
+  bits:  | 31 ... 16 | 15 ... 0 |
+ field: | device id | addr type id |
+
+ARM currently only requires this when using the in-kernel GIC support for the
+hardware vGIC features, using KVM_ARM_DEVICE_VGIC_V2 as the device id. When
+setting the base address for the guest's mapping of the vGIC virtual CPU
+and distributor interface, the ioctl must be called after calling
+KVM_CREATE_IRQCHIP, but before calling KVM_RUN on any of the VCPUs. Calling
+this ioctl twice for any of the base addresses will return -EEXIST.
+
+
5. The kvm_run structure
------------------------
diff --git a/arch/arm/include/uapi/asm/kvm.h b/arch/arm/include/uapi/asm/kvm.h
index fb41608..a7ae073 100644
--- a/arch/arm/include/uapi/asm/kvm.h
+++ b/arch/arm/include/uapi/asm/kvm.h
@@ -42,6 +42,19 @@ struct kvm_regs {
#define KVM_ARM_TARGET_CORTEX_A15 0
#define KVM_ARM_NUM_TARGETS 1
+/* KVM_SET_DEVICE_ADDRESS ioctl id encoding */
+#define KVM_DEVICE_TYPE_SHIFT 0
+#define KVM_DEVICE_TYPE_MASK (0xffff << KVM_DEVICE_TYPE_SHIFT)
+#define KVM_DEVICE_ID_SHIFT 16
+#define KVM_DEVICE_ID_MASK (0xffff << KVM_DEVICE_ID_SHIFT)
+
+/* Supported device IDs */
+#define KVM_ARM_DEVICE_VGIC_V2 0
+
+/* Supported VGIC address types */
+#define KVM_VGIC_V2_ADDR_TYPE_DIST 0
+#define KVM_VGIC_V2_ADDR_TYPE_CPU 1
+
struct kvm_vcpu_init {
__u32 target;
__u32 features[7];
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index acdfa63..c192399 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -164,6 +164,9 @@ int kvm_dev_ioctl_check_extension(long ext)
case KVM_CAP_COALESCED_MMIO:
r = KVM_COALESCED_MMIO_PAGE_OFFSET;
break;
+ case KVM_CAP_SET_DEVICE_ADDR:
+ r = 1;
+ break;
default:
r = 0;
break;
@@ -776,10 +779,29 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
return -EINVAL;
}
+static int kvm_vm_ioctl_set_device_address(struct kvm *kvm,
+ struct kvm_device_address *dev_addr)
+{
+ return -ENODEV;
+}
+
long kvm_arch_vm_ioctl(struct file *filp,
unsigned int ioctl, unsigned long arg)
{
- return -EINVAL;
+ struct kvm *kvm = filp->private_data;
+ void __user *argp = (void __user *)arg;
+
+ switch (ioctl) {
+ case KVM_SET_DEVICE_ADDRESS: {
+ struct kvm_device_address dev_addr;
+
+ if (copy_from_user(&dev_addr, argp, sizeof(dev_addr)))
+ return -EFAULT;
+ return kvm_vm_ioctl_set_device_address(kvm, &dev_addr);
+ }
+ default:
+ return -EINVAL;
+ }
}
static void cpu_init_hyp_mode(void *vector)
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 72f018b..1aaa15e 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -631,6 +631,7 @@ struct kvm_ppc_smmu_info {
#endif
#define KVM_CAP_IRQFD_RESAMPLE 82
#define KVM_CAP_PPC_BOOKE_WATCHDOG 83
+#define KVM_CAP_SET_DEVICE_ADDR 84
#ifdef KVM_CAP_IRQ_ROUTING
@@ -778,6 +779,11 @@ struct kvm_msi {
__u8 pad[16];
};
+struct kvm_device_address {
+ __u32 id;
+ __u64 addr;
+};
+
/*
* ioctls for VM fds
*/
@@ -861,6 +867,8 @@ struct kvm_s390_ucas_mapping {
#define KVM_CREATE_SPAPR_TCE _IOW(KVMIO, 0xa8, struct kvm_create_spapr_tce)
/* Available with KVM_CAP_RMA */
#define KVM_ALLOCATE_RMA _IOR(KVMIO, 0xa9, struct kvm_allocate_rma)
+/* Available with KVM_CAP_SET_DEVICE_ADDR */
+#define KVM_SET_DEVICE_ADDRESS _IOW(KVMIO, 0xaa, struct kvm_device_address)
/*
* ioctls for vcpu fds
* [PATCH v3 02/13] ARM: KVM: Keep track of currently running vcpus
2012-10-22 6:51 [PATCH v3 00/13] KVM/ARM vGIC support Christoffer Dall
2012-10-22 6:51 ` [PATCH v3 01/13] KVM: ARM: Introduce KVM_SET_DEVICE_ADDRESS ioctl Christoffer Dall
@ 2012-10-22 6:51 ` Christoffer Dall
2012-10-22 6:51 ` [PATCH v3 03/13] ARM: KVM: Initial VGIC infrastructure support Christoffer Dall
` (10 subsequent siblings)
12 siblings, 0 replies; 17+ messages in thread
From: Christoffer Dall @ 2012-10-22 6:51 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
When an interrupt occurs for the guest, it is sometimes necessary
to find out which vcpu was running at that point.
Keep track of which vcpu is being run in kvm_arch_vcpu_ioctl_run(),
and allow the data to be retrieved using either:
- kvm_arm_get_running_vcpu(): returns the vcpu running at this point
on the current CPU. Can only be used in a non-preemptible context.
- kvm_get_running_vcpus(): returns the per-CPU variable holding
the running vcpus, usable for per-CPU interrupts.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
---
arch/arm/include/asm/kvm_host.h | 10 ++++++++++
arch/arm/kvm/arm.c | 30 ++++++++++++++++++++++++++++++
2 files changed, 40 insertions(+)
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 2eddd96..c6f1102 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -156,4 +156,14 @@ static inline int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
{
return 0;
}
+
+struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
+struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
+
+int kvm_arm_copy_coproc_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
+unsigned long kvm_arm_num_coproc_regs(struct kvm_vcpu *vcpu);
+struct kvm_one_reg;
+int kvm_arm_coproc_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *);
+int kvm_arm_coproc_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *);
+
#endif /* __ARM_KVM_HOST_H__ */
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index c192399..828b5af 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -53,11 +53,38 @@ static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
static struct vfp_hard_struct __percpu *kvm_host_vfp_state;
static unsigned long hyp_default_vectors;
+/* Per-CPU variable containing the currently running vcpu. */
+static DEFINE_PER_CPU(struct kvm_vcpu *, kvm_arm_running_vcpu);
+
/* The VMID used in the VTTBR */
static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1);
static u8 kvm_next_vmid;
static DEFINE_SPINLOCK(kvm_vmid_lock);
+static void kvm_arm_set_running_vcpu(struct kvm_vcpu *vcpu)
+{
+ BUG_ON(preemptible());
+ __get_cpu_var(kvm_arm_running_vcpu) = vcpu;
+}
+
+/**
+ * kvm_arm_get_running_vcpu - get the vcpu running on the current CPU.
+ * Must be called from non-preemptible context
+ */
+struct kvm_vcpu *kvm_arm_get_running_vcpu(void)
+{
+ BUG_ON(preemptible());
+ return __get_cpu_var(kvm_arm_running_vcpu);
+}
+
+/**
+ * kvm_get_running_vcpus - get the per-CPU array of currently running vcpus.
+ */
+struct kvm_vcpu __percpu **kvm_get_running_vcpus(void)
+{
+ return &kvm_arm_running_vcpu;
+}
+
int kvm_arch_hardware_enable(void *garbage)
{
return 0;
@@ -299,10 +326,13 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
cpumask_clear_cpu(cpu, &vcpu->arch.require_dcache_flush);
flush_cache_all(); /* We'd really want v7_flush_dcache_all() */
}
+
+ kvm_arm_set_running_vcpu(vcpu);
}
void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
{
+ kvm_arm_set_running_vcpu(NULL);
}
int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
* [PATCH v3 03/13] ARM: KVM: Initial VGIC infrastructure support
2012-10-22 6:51 [PATCH v3 00/13] KVM/ARM vGIC support Christoffer Dall
2012-10-22 6:51 ` [PATCH v3 01/13] KVM: ARM: Introduce KVM_SET_DEVICE_ADDRESS ioctl Christoffer Dall
2012-10-22 6:51 ` [PATCH v3 02/13] ARM: KVM: Keep track of currently running vcpus Christoffer Dall
@ 2012-10-22 6:51 ` Christoffer Dall
2012-10-22 6:51 ` [PATCH v3 04/13] ARM: KVM: Initial VGIC MMIO support code Christoffer Dall
` (9 subsequent siblings)
12 siblings, 0 replies; 17+ messages in thread
From: Christoffer Dall @ 2012-10-22 6:51 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Wire the basic framework code for VGIC support. Nothing to enable
yet.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
---
arch/arm/include/asm/kvm_host.h | 7 ++++
arch/arm/include/asm/kvm_vgic.h | 70 +++++++++++++++++++++++++++++++++++++++
arch/arm/kvm/arm.c | 21 +++++++++++-
arch/arm/kvm/interrupts.S | 4 ++
arch/arm/kvm/mmio.c | 3 ++
virt/kvm/kvm_main.c | 5 ++-
6 files changed, 107 insertions(+), 3 deletions(-)
create mode 100644 arch/arm/include/asm/kvm_vgic.h
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index c6f1102..9bbccdf 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -22,6 +22,7 @@
#include <asm/kvm.h>
#include <asm/kvm_asm.h>
#include <asm/fpstate.h>
+#include <asm/kvm_vgic.h>
#define KVM_MAX_VCPUS NR_CPUS
#define KVM_MEMORY_SLOTS 32
@@ -57,6 +58,9 @@ struct kvm_arch {
/* Stage-2 page table */
pgd_t *pgd;
+
+ /* Interrupt controller */
+ struct vgic_dist vgic;
};
#define KVM_NR_MEM_OBJS 40
@@ -91,6 +95,9 @@ struct kvm_vcpu_arch {
struct vfp_hard_struct vfp_guest;
struct vfp_hard_struct *vfp_host;
+ /* VGIC state */
+ struct vgic_cpu vgic_cpu;
+
/*
* Anything that is not used directly from assembly code goes
* here.
diff --git a/arch/arm/include/asm/kvm_vgic.h b/arch/arm/include/asm/kvm_vgic.h
new file mode 100644
index 0000000..d75540a
--- /dev/null
+++ b/arch/arm/include/asm/kvm_vgic.h
@@ -0,0 +1,70 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __ASM_ARM_KVM_VGIC_H
+#define __ASM_ARM_KVM_VGIC_H
+
+struct vgic_dist {
+};
+
+struct vgic_cpu {
+};
+
+struct kvm;
+struct kvm_vcpu;
+struct kvm_run;
+struct kvm_exit_mmio;
+
+#ifndef CONFIG_KVM_ARM_VGIC
+static inline int kvm_vgic_hyp_init(void)
+{
+ return 0;
+}
+
+static inline int kvm_vgic_init(struct kvm *kvm)
+{
+ return 0;
+}
+
+static inline int kvm_vgic_create(struct kvm *kvm)
+{
+ return 0;
+}
+
+static inline void kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu) {}
+static inline void kvm_vgic_sync_to_cpu(struct kvm_vcpu *vcpu) {}
+static inline void kvm_vgic_sync_from_cpu(struct kvm_vcpu *vcpu) {}
+
+static inline int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
+{
+ return 0;
+}
+
+static inline bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ struct kvm_exit_mmio *mmio)
+{
+ return false;
+}
+
+static inline int irqchip_in_kernel(struct kvm *kvm)
+{
+ return 0;
+}
+#endif
+
+#endif
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 828b5af..a57b107 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -183,6 +183,9 @@ int kvm_dev_ioctl_check_extension(long ext)
{
int r;
switch (ext) {
+#ifdef CONFIG_KVM_ARM_VGIC
+ case KVM_CAP_IRQCHIP:
+#endif
case KVM_CAP_USER_MEMORY:
case KVM_CAP_DESTROY_MEMORY_REGION_WORKS:
case KVM_CAP_ONE_REG:
@@ -304,6 +307,10 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
{
/* Force users to call KVM_ARM_VCPU_INIT */
vcpu->arch.target = -1;
+
+ /* Set up VGIC */
+ kvm_vgic_vcpu_init(vcpu);
+
return 0;
}
@@ -363,7 +370,7 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
*/
int kvm_arch_vcpu_runnable(struct kvm_vcpu *v)
{
- return !!v->arch.irq_lines;
+ return !!v->arch.irq_lines || kvm_vgic_vcpu_pending_irq(v);
}
int kvm_arch_vcpu_in_guest_mode(struct kvm_vcpu *v)
@@ -632,6 +639,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
update_vttbr(vcpu->kvm);
+ kvm_vgic_sync_to_cpu(vcpu);
+
local_irq_disable();
/*
@@ -644,6 +653,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
if (ret <= 0 || need_new_vmid_gen(vcpu->kvm)) {
local_irq_enable();
+ kvm_vgic_sync_from_cpu(vcpu);
continue;
}
@@ -682,6 +692,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
* Back from guest
*************************************************************/
+ kvm_vgic_sync_from_cpu(vcpu);
+
ret = handle_exit(vcpu, run, ret);
}
@@ -964,6 +976,13 @@ static int init_hyp_mode(void)
}
}
+ /*
+ * Init HYP view of VGIC
+ */
+ err = kvm_vgic_hyp_init();
+ if (err)
+ goto out_free_mappings;
+
return 0;
out_free_vfp:
free_percpu(kvm_host_vfp_state);
diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
index 7c89708..e418c9b 100644
--- a/arch/arm/kvm/interrupts.S
+++ b/arch/arm/kvm/interrupts.S
@@ -91,6 +91,8 @@ ENTRY(__kvm_vcpu_run)
save_host_regs
+ restore_vgic_state r0
+
@ Store hardware CP15 state and load guest state
read_cp15_state
write_cp15_state 1, r0
@@ -184,6 +186,8 @@ after_vfp_restore:
read_cp15_state 1, r1
write_cp15_state
+ save_vgic_state r1
+
restore_host_regs
clrex @ Clear exclusive monitor
bx lr @ return to IOCTL
diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
index 28bd5eb..beb7134 100644
--- a/arch/arm/kvm/mmio.c
+++ b/arch/arm/kvm/mmio.c
@@ -146,6 +146,9 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
if (mmio.is_write)
memcpy(mmio.data, vcpu_reg(vcpu, rd), mmio.len);
+ if (vgic_handle_mmio(vcpu, run, &mmio))
+ return 1;
+
kvm_prepare_mmio(run, &mmio);
return 0;
}
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e59bb63..bef668b 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1882,12 +1882,13 @@ static long kvm_vcpu_ioctl(struct file *filp,
if (vcpu->kvm->mm != current->mm)
return -EIO;
-#if defined(CONFIG_S390) || defined(CONFIG_PPC)
+#if defined(CONFIG_S390) || defined(CONFIG_PPC) || defined(CONFIG_ARM)
/*
* Special cases: vcpu ioctls that are asynchronous to vcpu execution,
* so vcpu_load() would break it.
*/
- if (ioctl == KVM_S390_INTERRUPT || ioctl == KVM_INTERRUPT)
+ if (ioctl == KVM_S390_INTERRUPT || ioctl == KVM_INTERRUPT ||
+ ioctl == KVM_IRQ_LINE)
return kvm_arch_vcpu_ioctl(filp, ioctl, arg);
#endif
* [PATCH v3 04/13] ARM: KVM: Initial VGIC MMIO support code
2012-10-22 6:51 [PATCH v3 00/13] KVM/ARM vGIC support Christoffer Dall
` (2 preceding siblings ...)
2012-10-22 6:51 ` [PATCH v3 03/13] ARM: KVM: Initial VGIC infrastructure support Christoffer Dall
@ 2012-10-22 6:51 ` Christoffer Dall
2012-10-22 6:51 ` [PATCH v3 05/13] ARM: KVM: VGIC accept vcpu and dist base addresses from user space Christoffer Dall
` (8 subsequent siblings)
12 siblings, 0 replies; 17+ messages in thread
From: Christoffer Dall @ 2012-10-22 6:51 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Wire the initial in-kernel MMIO support code for the VGIC, used
for the distributor emulation.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
---
arch/arm/include/asm/kvm_vgic.h | 6 +-
arch/arm/kvm/Makefile | 1
arch/arm/kvm/vgic.c | 138 +++++++++++++++++++++++++++++++++++++++
3 files changed, 144 insertions(+), 1 deletion(-)
create mode 100644 arch/arm/kvm/vgic.c
diff --git a/arch/arm/include/asm/kvm_vgic.h b/arch/arm/include/asm/kvm_vgic.h
index d75540a..b444ecf 100644
--- a/arch/arm/include/asm/kvm_vgic.h
+++ b/arch/arm/include/asm/kvm_vgic.h
@@ -30,7 +30,11 @@ struct kvm_vcpu;
struct kvm_run;
struct kvm_exit_mmio;
-#ifndef CONFIG_KVM_ARM_VGIC
+#ifdef CONFIG_KVM_ARM_VGIC
+bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ struct kvm_exit_mmio *mmio);
+
+#else
static inline int kvm_vgic_hyp_init(void)
{
return 0;
diff --git a/arch/arm/kvm/Makefile b/arch/arm/kvm/Makefile
index 574c67c..3370c09 100644
--- a/arch/arm/kvm/Makefile
+++ b/arch/arm/kvm/Makefile
@@ -20,3 +20,4 @@ obj-$(CONFIG_KVM_ARM_HOST) += $(addprefix ../../../virt/kvm/, kvm_main.o coalesc
obj-$(CONFIG_KVM_ARM_HOST) += arm.o guest.o mmu.o emulate.o reset.o
obj-$(CONFIG_KVM_ARM_HOST) += coproc.o coproc_a15.o mmio.o
+obj-$(CONFIG_KVM_ARM_VGIC) += vgic.o
diff --git a/arch/arm/kvm/vgic.c b/arch/arm/kvm/vgic.c
new file mode 100644
index 0000000..26ada3b
--- /dev/null
+++ b/arch/arm/kvm/vgic.c
@@ -0,0 +1,138 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/kvm.h>
+#include <linux/kvm_host.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <asm/kvm_emulate.h>
+
+#define ACCESS_READ_VALUE (1 << 0)
+#define ACCESS_READ_RAZ (0 << 0)
+#define ACCESS_READ_MASK(x) ((x) & (1 << 0))
+#define ACCESS_WRITE_IGNORED (0 << 1)
+#define ACCESS_WRITE_SETBIT (1 << 1)
+#define ACCESS_WRITE_CLEARBIT (2 << 1)
+#define ACCESS_WRITE_VALUE (3 << 1)
+#define ACCESS_WRITE_MASK(x) ((x) & (3 << 1))
+
+/**
+ * vgic_reg_access - access vgic register
+ * @mmio: pointer to the data describing the mmio access
+ * @reg: pointer to the virtual backing of the vgic distributor struct
+ * @offset: least significant 2 bits used for word offset
+ * @mode: ACCESS_ mode (see defines above)
+ *
+ * Helper to make vgic register access easier using one of the access
+ * modes defined for vgic register access
+ * (read,raz,write-ignored,setbit,clearbit,write)
+ */
+static void vgic_reg_access(struct kvm_exit_mmio *mmio, u32 *reg,
+ u32 offset, int mode)
+{
+ int word_offset = offset & 3;
+ int shift = word_offset * 8;
+ u32 mask;
+ u32 regval;
+
+ /*
+ * Any alignment fault should have been delivered to the guest
+ * directly (ARM ARM B3.12.7 "Prioritization of aborts").
+ */
+
+ mask = (~0U) >> (word_offset * 8);
+ if (reg)
+ regval = *reg;
+ else {
+ BUG_ON(mode != (ACCESS_READ_RAZ | ACCESS_WRITE_IGNORED));
+ regval = 0;
+ }
+
+ if (mmio->is_write) {
+ u32 data = (*((u32 *)mmio->data) & mask) << shift;
+ switch (ACCESS_WRITE_MASK(mode)) {
+ case ACCESS_WRITE_IGNORED:
+ return;
+
+ case ACCESS_WRITE_SETBIT:
+ regval |= data;
+ break;
+
+ case ACCESS_WRITE_CLEARBIT:
+ regval &= ~data;
+ break;
+
+ case ACCESS_WRITE_VALUE:
+ regval = (regval & ~(mask << shift)) | data;
+ break;
+ }
+ *reg = regval;
+ } else {
+ switch (ACCESS_READ_MASK(mode)) {
+ case ACCESS_READ_RAZ:
+ regval = 0;
+ /* fall through */
+
+ case ACCESS_READ_VALUE:
+ *((u32 *)mmio->data) = (regval >> shift) & mask;
+ }
+ }
+}
+
+/* All this should be handled by kvm_bus_io_*()... FIXME!!! */
+struct mmio_range {
+ unsigned long base;
+ unsigned long len;
+ bool (*handle_mmio)(struct kvm_vcpu *vcpu, struct kvm_exit_mmio *mmio,
+ u32 offset);
+};
+
+static const struct mmio_range vgic_ranges[] = {
+ {}
+};
+
+static const
+struct mmio_range *find_matching_range(const struct mmio_range *ranges,
+ struct kvm_exit_mmio *mmio,
+ unsigned long base)
+{
+ const struct mmio_range *r = ranges;
+ unsigned long addr = mmio->phys_addr - base;
+
+ while (r->len) {
+ if (addr >= r->base &&
+ (addr + mmio->len) <= (r->base + r->len))
+ return r;
+ r++;
+ }
+
+ return NULL;
+}
+
+/**
+ * vgic_handle_mmio - handle an in-kernel MMIO access
+ * @vcpu: pointer to the vcpu performing the access
+ * @mmio: pointer to the data describing the access
+ *
+ * returns true if the MMIO access has been performed in kernel space,
+ * and false if it needs to be emulated in user space.
+ */
+bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run, struct kvm_exit_mmio *mmio)
+{
+	return false;	/* nothing handled in kernel yet: punt to user space */
+}
* [PATCH v3 05/13] ARM: KVM: VGIC accept vcpu and dist base addresses from user space
2012-10-22 6:51 [PATCH v3 00/13] KVM/ARM vGIC support Christoffer Dall
` (3 preceding siblings ...)
2012-10-22 6:51 ` [PATCH v3 04/13] ARM: KVM: Initial VGIC MMIO support code Christoffer Dall
@ 2012-10-22 6:51 ` Christoffer Dall
2012-10-22 6:51 ` [PATCH v3 06/13] ARM: KVM: VGIC distributor handling Christoffer Dall
` (7 subsequent siblings)
12 siblings, 0 replies; 17+ messages in thread
From: Christoffer Dall @ 2012-10-22 6:51 UTC (permalink / raw)
To: linux-arm-kernel
User space defines the model to emulate to a guest and should therefore
decide which addresses are used for both the virtual CPU interface
directly mapped in the guest physical address space and for the emulated
distributor interface, which is mapped in software by the in-kernel VGIC
support.
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
---
arch/arm/include/asm/kvm_mmu.h | 2 +
arch/arm/include/asm/kvm_vgic.h | 9 ++++++
arch/arm/kvm/arm.c | 16 ++++++++++
arch/arm/kvm/vgic.c | 61 +++++++++++++++++++++++++++++++++++++++
4 files changed, 87 insertions(+), 1 deletion(-)
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 9bd0508..0800531 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -26,6 +26,8 @@
* To save a bit of memory and to avoid alignment issues we assume 39-bit IPA
* for now, but remember that the level-1 table must be aligned to its size.
*/
+#define KVM_PHYS_SHIFT (38)
+#define KVM_PHYS_MASK ((1ULL << KVM_PHYS_SHIFT) - 1)
#define PTRS_PER_PGD2 512
#define PGD2_ORDER get_order(PTRS_PER_PGD2 * sizeof(pgd_t))
diff --git a/arch/arm/include/asm/kvm_vgic.h b/arch/arm/include/asm/kvm_vgic.h
index b444ecf..9ca8d21 100644
--- a/arch/arm/include/asm/kvm_vgic.h
+++ b/arch/arm/include/asm/kvm_vgic.h
@@ -20,6 +20,9 @@
#define __ASM_ARM_KVM_VGIC_H
struct vgic_dist {
+ /* Distributor and vcpu interface mapping in the guest */
+ phys_addr_t vgic_dist_base;
+ phys_addr_t vgic_cpu_base;
};
struct vgic_cpu {
@@ -31,6 +34,7 @@ struct kvm_run;
struct kvm_exit_mmio;
#ifdef CONFIG_KVM_ARM_VGIC
+int kvm_vgic_set_addr(struct kvm *kvm, unsigned long type, u64 addr);
bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
struct kvm_exit_mmio *mmio);
@@ -40,6 +44,11 @@ static inline int kvm_vgic_hyp_init(void)
return 0;
}
+static inline int kvm_vgic_set_addr(struct kvm *kvm, unsigned long type, u64 addr)
+{
+ return 0;
+}
+
static inline int kvm_vgic_init(struct kvm *kvm)
{
return 0;
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index a57b107..f92b4ec 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -61,6 +61,8 @@ static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1);
static u8 kvm_next_vmid;
static DEFINE_SPINLOCK(kvm_vmid_lock);
+static bool vgic_present;
+
static void kvm_arm_set_running_vcpu(struct kvm_vcpu *vcpu)
{
BUG_ON(preemptible());
@@ -824,7 +826,19 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
static int kvm_vm_ioctl_set_device_address(struct kvm *kvm,
struct kvm_device_address *dev_addr)
{
- return -ENODEV;
+ unsigned long dev_id, type;
+
+ dev_id = (dev_addr->id & KVM_DEVICE_ID_MASK) >> KVM_DEVICE_ID_SHIFT;
+ type = (dev_addr->id & KVM_DEVICE_TYPE_MASK) >> KVM_DEVICE_TYPE_SHIFT;
+
+ switch (dev_id) {
+ case KVM_ARM_DEVICE_VGIC_V2:
+ if (!vgic_present)
+ return -ENXIO;
+ return kvm_vgic_set_addr(kvm, type, dev_addr->addr);
+ default:
+ return -ENODEV;
+ }
}
long kvm_arch_vm_ioctl(struct file *filp,
diff --git a/arch/arm/kvm/vgic.c b/arch/arm/kvm/vgic.c
index 26ada3b..f85b275 100644
--- a/arch/arm/kvm/vgic.c
+++ b/arch/arm/kvm/vgic.c
@@ -22,6 +22,13 @@
#include <linux/io.h>
#include <asm/kvm_emulate.h>
+#define VGIC_ADDR_UNDEF (-1)
+#define IS_VGIC_ADDR_UNDEF(_x) ((_x) == (typeof(_x))VGIC_ADDR_UNDEF)
+
+#define VGIC_DIST_SIZE 0x1000
+#define VGIC_CPU_SIZE 0x2000
+
+
#define ACCESS_READ_VALUE (1 << 0)
#define ACCESS_READ_RAZ (0 << 0)
#define ACCESS_READ_MASK(x) ((x) & (1 << 0))
@@ -136,3 +143,57 @@ bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run, struct kvm_exi
{
return KVM_EXIT_MMIO;
}
+
+static bool vgic_ioaddr_overlap(struct kvm *kvm)
+{
+ phys_addr_t dist = kvm->arch.vgic.vgic_dist_base;
+ phys_addr_t cpu = kvm->arch.vgic.vgic_cpu_base;
+
+ if (IS_VGIC_ADDR_UNDEF(dist) || IS_VGIC_ADDR_UNDEF(cpu))
+ return false;
+ if ((dist <= cpu && dist + VGIC_DIST_SIZE > cpu) ||
+ (cpu <= dist && cpu + VGIC_CPU_SIZE > dist))
+ return true;
+ return false;
+}
+
+int kvm_vgic_set_addr(struct kvm *kvm, unsigned long type, u64 addr)
+{
+ int r = 0;
+ struct vgic_dist *vgic = &kvm->arch.vgic;
+
+ if (addr & ~KVM_PHYS_MASK)
+ return -E2BIG;
+
+ if (addr & ~PAGE_MASK)
+ return -EINVAL;
+
+	mutex_lock(&kvm->lock);
+	switch (type) {
+	case KVM_VGIC_V2_ADDR_TYPE_DIST:
+		if (!IS_VGIC_ADDR_UNDEF(vgic->vgic_dist_base)) {
+			r = -EEXIST;
+			goto out;
+		}
+		if (addr + VGIC_DIST_SIZE < addr) {
+			r = -EINVAL;
+			goto out;
+		}
+		kvm->arch.vgic.vgic_dist_base = addr;
+		break;
+	case KVM_VGIC_V2_ADDR_TYPE_CPU:
+		if (!IS_VGIC_ADDR_UNDEF(vgic->vgic_cpu_base)) {
+			r = -EEXIST;
+			goto out;
+		}
+		if (addr + VGIC_CPU_SIZE < addr) {
+			r = -EINVAL;
+			goto out;
+		}
+		kvm->arch.vgic.vgic_cpu_base = addr;
+		break;
+	default:
+		r = -ENODEV;
+	}
+
+	if (vgic_ioaddr_overlap(kvm)) {
+		kvm->arch.vgic.vgic_dist_base = VGIC_ADDR_UNDEF;
+		kvm->arch.vgic.vgic_cpu_base = VGIC_ADDR_UNDEF;
+		r = -EINVAL;
+	}
+
+out:
+	mutex_unlock(&kvm->lock);
+	return r;
+}
* [PATCH v3 06/13] ARM: KVM: VGIC distributor handling
2012-10-22 6:51 [PATCH v3 00/13] KVM/ARM vGIC support Christoffer Dall
` (4 preceding siblings ...)
2012-10-22 6:51 ` [PATCH v3 05/13] ARM: KVM: VGIC accept vcpu and dist base addresses from user space Christoffer Dall
@ 2012-10-22 6:51 ` Christoffer Dall
2012-10-22 6:52 ` [PATCH v3 07/13] ARM: KVM: VGIC virtual CPU interface management Christoffer Dall
` (6 subsequent siblings)
12 siblings, 0 replies; 17+ messages in thread
From: Christoffer Dall @ 2012-10-22 6:51 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Add the GIC distributor emulation code. A number of the GIC features
are simply ignored as they are not required to boot a Linux guest.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
---
arch/arm/include/asm/kvm_vgic.h | 167 ++++++++++++++
arch/arm/kvm/vgic.c | 471 +++++++++++++++++++++++++++++++++++++++
2 files changed, 637 insertions(+), 1 deletion(-)
diff --git a/arch/arm/include/asm/kvm_vgic.h b/arch/arm/include/asm/kvm_vgic.h
index 9ca8d21..9e60b1d 100644
--- a/arch/arm/include/asm/kvm_vgic.h
+++ b/arch/arm/include/asm/kvm_vgic.h
@@ -19,10 +19,177 @@
#ifndef __ASM_ARM_KVM_VGIC_H
#define __ASM_ARM_KVM_VGIC_H
+#include <linux/kernel.h>
+#include <linux/kvm.h>
+#include <linux/kvm_host.h>
+#include <linux/irqreturn.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+
+#define VGIC_NR_IRQS 128
+#define VGIC_NR_SHARED_IRQS (VGIC_NR_IRQS - 32)
+#define VGIC_MAX_CPUS NR_CPUS
+
+/* Sanity checks... */
+#if (VGIC_MAX_CPUS > 8)
+#error Invalid number of CPU interfaces
+#endif
+
+#if (VGIC_NR_IRQS & 31)
+#error "VGIC_NR_IRQS must be a multiple of 32"
+#endif
+
+#if (VGIC_NR_IRQS > 1024)
+#error "VGIC_NR_IRQS must be <= 1024"
+#endif
+
+/*
+ * The GIC distributor registers describing interrupts have two parts:
+ * - 32 per-CPU interrupts (SGI + PPI)
+ * - a bunch of shared interrupts (SPI)
+ */
+struct vgic_bitmap {
+ union {
+ u32 reg[1];
+ unsigned long reg_ul[0];
+ } percpu[VGIC_MAX_CPUS];
+ union {
+ u32 reg[VGIC_NR_SHARED_IRQS / 32];
+ unsigned long reg_ul[0];
+ } shared;
+};
+
+static inline u32 *vgic_bitmap_get_reg(struct vgic_bitmap *x,
+ int cpuid, u32 offset)
+{
+ offset >>= 2;
+ BUG_ON(offset > (VGIC_NR_IRQS / 32));
+ if (!offset)
+ return x->percpu[cpuid].reg;
+ else
+ return x->shared.reg + offset - 1;
+}
+
+static inline int vgic_bitmap_get_irq_val(struct vgic_bitmap *x,
+ int cpuid, int irq)
+{
+ if (irq < 32)
+ return test_bit(irq, x->percpu[cpuid].reg_ul);
+
+ return test_bit(irq - 32, x->shared.reg_ul);
+}
+
+static inline void vgic_bitmap_set_irq_val(struct vgic_bitmap *x,
+ int cpuid, int irq, int val)
+{
+ unsigned long *reg;
+
+ if (irq < 32)
+ reg = x->percpu[cpuid].reg_ul;
+ else {
+ reg = x->shared.reg_ul;
+ irq -= 32;
+ }
+
+ if (val)
+ set_bit(irq, reg);
+ else
+ clear_bit(irq, reg);
+}
+
+static inline unsigned long *vgic_bitmap_get_cpu_map(struct vgic_bitmap *x,
+ int cpuid)
+{
+ if (unlikely(cpuid >= VGIC_MAX_CPUS))
+ return NULL;
+ return x->percpu[cpuid].reg_ul;
+}
+
+static inline unsigned long *vgic_bitmap_get_shared_map(struct vgic_bitmap *x)
+{
+ return x->shared.reg_ul;
+}
+
+struct vgic_bytemap {
+ union {
+ u32 reg[8];
+ unsigned long reg_ul[0];
+ } percpu[VGIC_MAX_CPUS];
+ union {
+ u32 reg[VGIC_NR_SHARED_IRQS / 4];
+ unsigned long reg_ul[0];
+ } shared;
+};
+
+static inline u32 *vgic_bytemap_get_reg(struct vgic_bytemap *x,
+ int cpuid, u32 offset)
+{
+ offset >>= 2;
+ BUG_ON(offset > (VGIC_NR_IRQS / 4));
+ if (offset < 4)
+ return x->percpu[cpuid].reg + offset;
+ else
+ return x->shared.reg + offset - 8;
+}
+
+static inline int vgic_bytemap_get_irq_val(struct vgic_bytemap *x,
+ int cpuid, int irq)
+{
+ u32 *reg, shift;
+ shift = (irq & 3) * 8;
+ reg = vgic_bytemap_get_reg(x, cpuid, irq);
+ return (*reg >> shift) & 0xff;
+}
+
+static inline void vgic_bytemap_set_irq_val(struct vgic_bytemap *x,
+ int cpuid, int irq, int val)
+{
+ u32 *reg, shift;
+ shift = (irq & 3) * 8;
+ reg = vgic_bytemap_get_reg(x, cpuid, irq);
+ *reg &= ~(0xff << shift);
+ *reg |= (val & 0xff) << shift;
+}
+
struct vgic_dist {
+#ifdef CONFIG_KVM_ARM_VGIC
+ spinlock_t lock;
+
+ /* Virtual control interface mapping */
+ void __iomem *vctrl_base;
+
/* Distributor and vcpu interface mapping in the guest */
phys_addr_t vgic_dist_base;
phys_addr_t vgic_cpu_base;
+
+ /* Distributor enabled */
+ u32 enabled;
+
+ /* Interrupt enabled (one bit per IRQ) */
+ struct vgic_bitmap irq_enabled;
+
+ /* Interrupt 'pin' level */
+ struct vgic_bitmap irq_state;
+
+ /* Level-triggered interrupt in progress */
+ struct vgic_bitmap irq_active;
+
+ /* Interrupt priority. Not used yet. */
+ struct vgic_bytemap irq_priority;
+
+ /* Level/edge triggered */
+ struct vgic_bitmap irq_cfg;
+
+ /* Source CPU per SGI and target CPU */
+ u8 irq_sgi_sources[VGIC_MAX_CPUS][16];
+
+ /* Target CPU for each IRQ */
+ u8 irq_spi_cpu[VGIC_NR_SHARED_IRQS];
+ struct vgic_bitmap irq_spi_target[VGIC_MAX_CPUS];
+
+ /* Bitmap indicating which CPU has something pending */
+ unsigned long irq_pending_on_cpu;
+#endif
};
struct vgic_cpu {
diff --git a/arch/arm/kvm/vgic.c b/arch/arm/kvm/vgic.c
index f85b275..82feee8 100644
--- a/arch/arm/kvm/vgic.c
+++ b/arch/arm/kvm/vgic.c
@@ -22,6 +22,42 @@
#include <linux/io.h>
#include <asm/kvm_emulate.h>
+/*
+ * How the whole thing works (courtesy of Christoffer Dall):
+ *
+ * - At any time, the dist->irq_pending_on_cpu is the oracle that knows if
+ * something is pending
+ * - VGIC pending interrupts are stored in the vgic.irq_state vgic
+ * bitmap (this bitmap is updated by both userspace ioctls and guest
+ * mmio ops) and indicate the 'wire' state.
+ * - Every time the bitmap changes, the irq_pending_on_cpu oracle is
+ * recalculated
+ * - To calculate the oracle, we need info for each cpu from
+ * compute_pending_for_cpu, which considers:
+ * - PPI: dist->irq_state & dist->irq_enable
+ * - SPI: dist->irq_state & dist->irq_enable & dist->irq_spi_target
+ * - irq_spi_target is a 'formatted' version of the GICD_ITARGETSRn
+ * registers, stored on each vcpu. We only keep one bit of
+ * information per interrupt, making sure that only one vcpu can
+ * accept the interrupt.
+ * - The same is true when injecting an interrupt, except that we only
+ * consider a single interrupt at a time. The irq_spi_cpu array
+ * contains the target CPU for each SPI.
+ *
+ * The handling of level interrupts adds some extra complexity. We
+ * need to track when the interrupt has been EOIed, so we can sample
+ * the 'line' again. This is achieved as such:
+ *
+ * - When a level interrupt is moved onto a vcpu, the corresponding
+ * bit in irq_active is set. As long as this bit is set, the line
+ * will be ignored for further interrupts. The interrupt is injected
+ * into the vcpu with the VGIC_LR_EOI bit set (generate a
+ * maintenance interrupt on EOI).
+ * - When the interrupt is EOIed, the maintenance interrupt fires,
+ * and clears the corresponding bit in irq_active. This allows the
+ * interrupt line to be sampled again.
+ */
+
#define VGIC_ADDR_UNDEF (-1)
#define IS_VGIC_ADDR_UNDEF(_x) ((_x) == (typeof(_x))VGIC_ADDR_UNDEF)
@@ -38,6 +74,14 @@
#define ACCESS_WRITE_VALUE (3 << 1)
#define ACCESS_WRITE_MASK(x) ((x) & (3 << 1))
+static void vgic_update_state(struct kvm *kvm);
+static void vgic_dispatch_sgi(struct kvm_vcpu *vcpu, u32 reg);
+
+static inline int vgic_irq_is_edge(struct vgic_dist *dist, int irq)
+{
+ return vgic_bitmap_get_irq_val(&dist->irq_cfg, 0, irq);
+}
+
/**
* vgic_reg_access - access vgic register
* @mmio: pointer to the data describing the mmio access
@@ -101,6 +145,280 @@ static void vgic_reg_access(struct kvm_exit_mmio *mmio, u32 *reg,
}
}
+static bool handle_mmio_misc(struct kvm_vcpu *vcpu,
+ struct kvm_exit_mmio *mmio, u32 offset)
+{
+ u32 reg;
+ u32 u32off = offset & 3;
+
+ switch (offset & ~3) {
+ case 0: /* CTLR */
+ reg = vcpu->kvm->arch.vgic.enabled;
+ vgic_reg_access(mmio, &reg, u32off,
+ ACCESS_READ_VALUE | ACCESS_WRITE_VALUE);
+ if (mmio->is_write) {
+ vcpu->kvm->arch.vgic.enabled = reg & 1;
+ vgic_update_state(vcpu->kvm);
+ return true;
+ }
+ break;
+
+ case 4: /* TYPER */
+ reg = (atomic_read(&vcpu->kvm->online_vcpus) - 1) << 5;
+ reg |= (VGIC_NR_IRQS >> 5) - 1;
+ vgic_reg_access(mmio, &reg, u32off,
+ ACCESS_READ_VALUE | ACCESS_WRITE_IGNORED);
+ break;
+
+ case 8: /* IIDR */
+ reg = 0x4B00043B;
+ vgic_reg_access(mmio, &reg, u32off,
+ ACCESS_READ_VALUE | ACCESS_WRITE_IGNORED);
+ break;
+ }
+
+ return false;
+}
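The TYPER case above packs two facts into one register: the number of implemented CPU interfaces (minus one) in bits [7:5], and the number of supported interrupt lines, encoded as 32*(N+1), in bits [4:0]. A minimal sketch of that encoding (the `typer_encode` helper is invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* GICD_TYPER as emulated above: CPU count in bits [7:5] holds
 * (ncpus - 1); bits [4:0] hold N where 32 * (N + 1) IRQs are
 * supported. */
static uint32_t typer_encode(int ncpus, int nr_irqs)
{
    return ((uint32_t)(ncpus - 1) << 5) | (((uint32_t)nr_irqs >> 5) - 1);
}
```

With 4 online vcpus and the series' VGIC_NR_IRQS of 128, the guest reads back (3 << 5) | 3 = 0x63.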
+
+static bool handle_mmio_raz_wi(struct kvm_vcpu *vcpu,
+ struct kvm_exit_mmio *mmio, u32 offset)
+{
+ vgic_reg_access(mmio, NULL, offset,
+ ACCESS_READ_RAZ | ACCESS_WRITE_IGNORED);
+ return false;
+}
+
+static bool handle_mmio_set_enable_reg(struct kvm_vcpu *vcpu,
+ struct kvm_exit_mmio *mmio, u32 offset)
+{
+ u32 *reg = vgic_bitmap_get_reg(&vcpu->kvm->arch.vgic.irq_enabled,
+ vcpu->vcpu_id, offset);
+ vgic_reg_access(mmio, reg, offset,
+ ACCESS_READ_VALUE | ACCESS_WRITE_SETBIT);
+ if (mmio->is_write) {
+ vgic_update_state(vcpu->kvm);
+ return true;
+ }
+
+ return false;
+}
+
+static bool handle_mmio_clear_enable_reg(struct kvm_vcpu *vcpu,
+ struct kvm_exit_mmio *mmio, u32 offset)
+{
+ u32 *reg = vgic_bitmap_get_reg(&vcpu->kvm->arch.vgic.irq_enabled,
+ vcpu->vcpu_id, offset);
+ vgic_reg_access(mmio, reg, offset,
+ ACCESS_READ_VALUE | ACCESS_WRITE_CLEARBIT);
+ if (mmio->is_write) {
+ if (offset < 4) /* Force SGI enabled */
+ *reg |= 0xffff;
+ vgic_update_state(vcpu->kvm);
+ return true;
+ }
+
+ return false;
+}
+
+static bool handle_mmio_set_pending_reg(struct kvm_vcpu *vcpu,
+ struct kvm_exit_mmio *mmio, u32 offset)
+{
+ u32 *reg = vgic_bitmap_get_reg(&vcpu->kvm->arch.vgic.irq_state,
+ vcpu->vcpu_id, offset);
+ vgic_reg_access(mmio, reg, offset,
+ ACCESS_READ_VALUE | ACCESS_WRITE_SETBIT);
+ if (mmio->is_write) {
+ vgic_update_state(vcpu->kvm);
+ return true;
+ }
+
+ return false;
+}
+
+static bool handle_mmio_clear_pending_reg(struct kvm_vcpu *vcpu,
+ struct kvm_exit_mmio *mmio, u32 offset)
+{
+ u32 *reg = vgic_bitmap_get_reg(&vcpu->kvm->arch.vgic.irq_state,
+ vcpu->vcpu_id, offset);
+ vgic_reg_access(mmio, reg, offset,
+ ACCESS_READ_VALUE | ACCESS_WRITE_CLEARBIT);
+ if (mmio->is_write) {
+ vgic_update_state(vcpu->kvm);
+ return true;
+ }
+
+ return false;
+}
+
+static bool handle_mmio_priority_reg(struct kvm_vcpu *vcpu,
+ struct kvm_exit_mmio *mmio, u32 offset)
+{
+ u32 *reg = vgic_bytemap_get_reg(&vcpu->kvm->arch.vgic.irq_priority,
+ vcpu->vcpu_id, offset);
+ vgic_reg_access(mmio, reg, offset,
+ ACCESS_READ_VALUE | ACCESS_WRITE_VALUE);
+ return false;
+}
+
+static u32 vgic_get_target_reg(struct kvm *kvm, int irq)
+{
+ struct vgic_dist *dist = &kvm->arch.vgic;
+ struct kvm_vcpu *vcpu;
+ int i, c;
+ unsigned long *bmap;
+ u32 val = 0;
+
+ BUG_ON(irq & 3);
+ BUG_ON(irq < 32);
+
+ irq -= 32;
+
+ kvm_for_each_vcpu(c, vcpu, kvm) {
+ bmap = vgic_bitmap_get_shared_map(&dist->irq_spi_target[c]);
+ for (i = 0; i < 4; i++)
+ if (test_bit(irq + i, bmap))
+ val |= 1 << (c + i * 8);
+ }
+
+ return val;
+}
+
+static void vgic_set_target_reg(struct kvm *kvm, u32 val, int irq)
+{
+ struct vgic_dist *dist = &kvm->arch.vgic;
+ struct kvm_vcpu *vcpu;
+ int i, c;
+ unsigned long *bmap;
+ u32 target;
+
+ BUG_ON(irq & 3);
+ BUG_ON(irq < 32);
+
+ irq -= 32;
+
+ /*
+ * Pick the LSB in each byte. This ensures we target exactly
+ * one vcpu per IRQ. If the byte is zero, assume we target
+ * CPU0.
+ */
+ for (i = 0; i < 4; i++) {
+ int shift = i * 8;
+ target = ffs((val >> shift) & 0xffU);
+ target = target ? (target - 1) : 0;
+ dist->irq_spi_cpu[irq + i] = target;
+ kvm_for_each_vcpu(c, vcpu, kvm) {
+ bmap = vgic_bitmap_get_shared_map(&dist->irq_spi_target[c]);
+ if (c == target)
+ set_bit(irq + i, bmap);
+ else
+ clear_bit(irq + i, bmap);
+ }
+ }
+}
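The loop above collapses each ITARGETSR byte (a target *mask* in hardware) down to a single CPU by keeping only the lowest set bit. That per-byte decision can be exercised on its own (the `pick_target` name is an assumption made for this sketch):

```c
#include <assert.h>
#include <stdint.h>
#include <strings.h>  /* ffs() */

/* For each byte of an ITARGETSR value, keep only the lowest set bit
 * so that exactly one vcpu owns the interrupt; an empty byte falls
 * back to CPU0, matching the comment in vgic_set_target_reg. */
static int pick_target(uint8_t byte)
{
    int t = ffs(byte);
    return t ? t - 1 : 0;
}
```

So a guest writing 0x06 (CPUs 1 and 2) ends up targeting CPU1 only, and a zero byte silently becomes CPU0.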
+
+static bool handle_mmio_target_reg(struct kvm_vcpu *vcpu,
+ struct kvm_exit_mmio *mmio, u32 offset)
+{
+ u32 reg;
+
+ /* We treat the banked interrupts targets as read-only */
+ if (offset < 32) {
+ u32 roreg = 1 << vcpu->vcpu_id;
+ roreg |= roreg << 8;
+ roreg |= roreg << 16;
+
+ vgic_reg_access(mmio, &roreg, offset,
+ ACCESS_READ_VALUE | ACCESS_WRITE_IGNORED);
+ return false;
+ }
+
+ reg = vgic_get_target_reg(vcpu->kvm, offset & ~3U);
+ vgic_reg_access(mmio, &reg, offset,
+ ACCESS_READ_VALUE | ACCESS_WRITE_VALUE);
+ if (mmio->is_write) {
+ vgic_set_target_reg(vcpu->kvm, reg, offset & ~3U);
+ vgic_update_state(vcpu->kvm);
+ return true;
+ }
+
+ return false;
+}
+
+static u32 vgic_cfg_expand(u16 val)
+{
+ u32 res = 0;
+ int i;
+
+ for (i = 0; i < 16; i++)
+ res |= ((val >> i) & 1) << (2 * i + 1);
+
+ return res;
+}
+
+static u16 vgic_cfg_compress(u32 val)
+{
+ u16 res = 0;
+ int i;
+
+ for (i = 0; i < 16; i++)
+ res |= ((val >> (i * 2 + 1)) & 1) << i;
+
+ return res;
+}
+
+/*
+ * The distributor uses 2 bits per IRQ for the CFG register, but the
+ * LSB is always 0. As such, we only keep the upper bit, and use the
+ * two above functions to compress/expand the bits
+ */
+static bool handle_mmio_cfg_reg(struct kvm_vcpu *vcpu,
+ struct kvm_exit_mmio *mmio, u32 offset)
+{
+ u32 val;
+ u32 *reg = vgic_bitmap_get_reg(&vcpu->kvm->arch.vgic.irq_cfg,
+ vcpu->vcpu_id, offset >> 1);
+ if (offset & 2)
+ val = *reg >> 16;
+ else
+ val = *reg & 0xffff;
+
+ val = vgic_cfg_expand(val);
+ vgic_reg_access(mmio, &val, offset,
+ ACCESS_READ_VALUE | ACCESS_WRITE_VALUE);
+ if (mmio->is_write) {
+ if (offset < 4) {
+ *reg = ~0U; /* Force PPIs/SGIs to 1 */
+ return false;
+ }
+
+ val = vgic_cfg_compress(val);
+ if (offset & 2) {
+ *reg &= 0xffff;
+ *reg |= val << 16;
+ } else {
+ *reg &= 0xffff << 16;
+ *reg |= val;
+ }
+ }
+
+ return false;
+}
+
+static bool handle_mmio_sgi_reg(struct kvm_vcpu *vcpu,
+ struct kvm_exit_mmio *mmio, u32 offset)
+{
+ u32 reg;
+ vgic_reg_access(mmio, &reg, offset,
+ ACCESS_READ_RAZ | ACCESS_WRITE_VALUE);
+ if (mmio->is_write) {
+ vgic_dispatch_sgi(vcpu, reg);
+ vgic_update_state(vcpu->kvm);
+ return true;
+ }
+
+ return false;
+}
+
/* All this should be handled by kvm_bus_io_*()... FIXME!!! */
struct mmio_range {
unsigned long base;
@@ -110,6 +428,66 @@ struct mmio_range {
};
static const struct mmio_range vgic_ranges[] = {
{ /* CTLR, TYPER, IIDR */
+ .base = 0,
+ .len = 12,
+ .handle_mmio = handle_mmio_misc,
+ },
+ { /* IGROUPRn */
+ .base = 0x80,
+ .len = VGIC_NR_IRQS / 8,
+ .handle_mmio = handle_mmio_raz_wi,
+ },
+ { /* ISENABLERn */
+ .base = 0x100,
+ .len = VGIC_NR_IRQS / 8,
+ .handle_mmio = handle_mmio_set_enable_reg,
+ },
+ { /* ICENABLERn */
+ .base = 0x180,
+ .len = VGIC_NR_IRQS / 8,
+ .handle_mmio = handle_mmio_clear_enable_reg,
+ },
+ { /* ISPENDRn */
+ .base = 0x200,
+ .len = VGIC_NR_IRQS / 8,
+ .handle_mmio = handle_mmio_set_pending_reg,
+ },
+ { /* ICPENDRn */
+ .base = 0x280,
+ .len = VGIC_NR_IRQS / 8,
+ .handle_mmio = handle_mmio_clear_pending_reg,
+ },
+ { /* ISACTIVERn */
+ .base = 0x300,
+ .len = VGIC_NR_IRQS / 8,
+ .handle_mmio = handle_mmio_raz_wi,
+ },
+ { /* ICACTIVERn */
+ .base = 0x380,
+ .len = VGIC_NR_IRQS / 8,
+ .handle_mmio = handle_mmio_raz_wi,
+ },
+ { /* IPRIORITYRn */
+ .base = 0x400,
+ .len = VGIC_NR_IRQS,
+ .handle_mmio = handle_mmio_priority_reg,
+ },
+ { /* ITARGETSRn */
+ .base = 0x800,
+ .len = VGIC_NR_IRQS,
+ .handle_mmio = handle_mmio_target_reg,
+ },
+ { /* ICFGRn */
+ .base = 0xC00,
+ .len = VGIC_NR_IRQS / 4,
+ .handle_mmio = handle_mmio_cfg_reg,
+ },
+ { /* SGIRn */
+ .base = 0xF00,
+ .len = 4,
+ .handle_mmio = handle_mmio_sgi_reg,
+ },
{}
};
@@ -141,7 +519,98 @@ struct mmio_range *find_matching_range(const struct mmio_range *ranges,
*/
bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run, struct kvm_exit_mmio *mmio)
{
- return KVM_EXIT_MMIO;
+ const struct mmio_range *range;
+ struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+ unsigned long base = dist->vgic_dist_base;
+ bool updated_state;
+
+ if (!irqchip_in_kernel(vcpu->kvm) ||
+ mmio->phys_addr < base ||
+ (mmio->phys_addr + mmio->len) > (base + dist->vgic_dist_size))
+ return false;
+
+ range = find_matching_range(vgic_ranges, mmio, base);
+ if (unlikely(!range || !range->handle_mmio)) {
+ pr_warn("Unhandled access %d %08llx %d\n",
+ mmio->is_write, mmio->phys_addr, mmio->len);
+ return false;
+ }
+
+ spin_lock(&vcpu->kvm->arch.vgic.lock);
+ updated_state = range->handle_mmio(vcpu, mmio,
+ mmio->phys_addr - range->base - base);
+ spin_unlock(&vcpu->kvm->arch.vgic.lock);
+ kvm_prepare_mmio(run, mmio);
+ kvm_handle_mmio_return(vcpu, run);
+
+ return true;
+}
+
+static void vgic_dispatch_sgi(struct kvm_vcpu *vcpu, u32 reg)
+{
+ struct kvm *kvm = vcpu->kvm;
+ struct vgic_dist *dist = &kvm->arch.vgic;
+ int nrcpus = atomic_read(&kvm->online_vcpus);
+ u8 target_cpus;
+ int sgi, mode, c, vcpu_id;
+
+ vcpu_id = vcpu->vcpu_id;
+
+ sgi = reg & 0xf;
+ target_cpus = (reg >> 16) & 0xff;
+ mode = (reg >> 24) & 3;
+
+ switch (mode) {
+ case 0:
+ if (!target_cpus)
+ return;
+ break;
+
+ case 1:
+ target_cpus = ((1 << nrcpus) - 1) & ~(1 << vcpu_id) & 0xff;
+ break;
+
+ case 2:
+ target_cpus = 1 << vcpu_id;
+ break;
+ }
+
+ kvm_for_each_vcpu(c, vcpu, kvm) {
+ if (target_cpus & 1) {
+ /* Flag the SGI as pending */
+ vgic_bitmap_set_irq_val(&dist->irq_state, c, sgi, 1);
+ dist->irq_sgi_sources[c][sgi] |= 1 << vcpu_id;
+ kvm_debug("SGI%d from CPU%d to CPU%d\n", sgi, vcpu_id, c);
+ }
+
+ target_cpus >>= 1;
+ }
+}
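The GICD_SGIR decode above pulls three fields out of one write: the SGI number from bits [3:0], a CPU target list from bits [23:16], and a filter mode from bits [25:24] (0: use the list, 1: everyone but the sender, 2: sender only). A standalone sketch of that decode and filtering (the `sgir_*` names are assumptions for this example; mode 0 with an empty list yields no targets, mirroring the early return above):

```c
#include <assert.h>
#include <stdint.h>

struct sgir {
    int sgi;              /* bits [3:0] */
    uint8_t target_cpus;  /* bits [23:16] */
    int mode;             /* bits [25:24] */
};

static struct sgir sgir_decode(uint32_t reg)
{
    struct sgir s;

    s.sgi = reg & 0xf;
    s.target_cpus = (reg >> 16) & 0xff;
    s.mode = (reg >> 24) & 3;
    return s;
}

/* Resolve the final target mask for a given sender and vcpu count. */
static uint8_t sgir_targets(struct sgir s, int sender, int nrcpus)
{
    switch (s.mode) {
    case 0: /* targeted list, possibly empty */
        return s.target_cpus;
    case 1: /* all but self */
        return ((1 << nrcpus) - 1) & ~(1 << sender) & 0xff;
    case 2: /* self only */
        return 1 << sender;
    }
    return 0;
}
```

For a 4-vcpu guest, CPU0 broadcasting with mode 1 hits CPUs 1-3 (mask 0x0e), and mode 2 always resolves to the sender alone.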
+
+static int compute_pending_for_cpu(struct kvm_vcpu *vcpu)
+{
+ return 0;
+}
+
+/*
+ * Update the interrupt state and determine which CPUs have pending
+ * interrupts. Must be called with distributor lock held.
+ */
+static void vgic_update_state(struct kvm *kvm)
+{
+ struct vgic_dist *dist = &kvm->arch.vgic;
+ struct kvm_vcpu *vcpu;
+ int c;
+
+ if (!dist->enabled) {
+ set_bit(0, &dist->irq_pending_on_cpu);
+ return;
+ }
+
+ kvm_for_each_vcpu(c, vcpu, kvm) {
+ if (compute_pending_for_cpu(vcpu)) {
+ pr_debug("CPU%d has pending interrupts\n", c);
+ set_bit(c, &dist->irq_pending_on_cpu);
+ }
+ }
}
static bool vgic_ioaddr_overlap(struct kvm *kvm)
* [PATCH v3 07/13] ARM: KVM: VGIC virtual CPU interface management
From: Christoffer Dall @ 2012-10-22 6:52 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Add VGIC virtual CPU interface code, picking pending interrupts
from the distributor and stashing them in the VGIC control interface
list registers.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
---
arch/arm/include/asm/kvm_vgic.h | 41 +++++++
arch/arm/kvm/vgic.c | 226 +++++++++++++++++++++++++++++++++++++++
2 files changed, 266 insertions(+), 1 deletion(-)
diff --git a/arch/arm/include/asm/kvm_vgic.h b/arch/arm/include/asm/kvm_vgic.h
index 9e60b1d..7229324 100644
--- a/arch/arm/include/asm/kvm_vgic.h
+++ b/arch/arm/include/asm/kvm_vgic.h
@@ -193,8 +193,45 @@ struct vgic_dist {
};
struct vgic_cpu {
+#ifdef CONFIG_KVM_ARM_VGIC
+ /* per IRQ to LR mapping */
+ u8 vgic_irq_lr_map[VGIC_NR_IRQS];
+
+ /* Pending interrupts on this VCPU */
+ DECLARE_BITMAP(pending, VGIC_NR_IRQS);
+
+ /* Bitmap of used/free list registers */
+ DECLARE_BITMAP(lr_used, 64);
+
+ /* Number of list registers on this CPU */
+ int nr_lr;
+
+ /* CPU vif control registers for world switch */
+ u32 vgic_hcr;
+ u32 vgic_vmcr;
+ u32 vgic_misr; /* Saved only */
+ u32 vgic_eisr[2]; /* Saved only */
+ u32 vgic_elrsr[2]; /* Saved only */
+ u32 vgic_apr;
+ u32 vgic_lr[64]; /* A15 has only 4... */
+#endif
};
+#define VGIC_HCR_EN (1 << 0)
+#define VGIC_HCR_UIE (1 << 1)
+
+#define VGIC_LR_VIRTUALID (0x3ff << 0)
+#define VGIC_LR_PHYSID_CPUID (7 << 10)
+#define VGIC_LR_STATE (3 << 28)
+#define VGIC_LR_PENDING_BIT (1 << 28)
+#define VGIC_LR_ACTIVE_BIT (1 << 29)
+#define VGIC_LR_EOI (1 << 19)
+
+#define VGIC_MISR_EOI (1 << 0)
+#define VGIC_MISR_U (1 << 1)
+
+#define LR_EMPTY 0xff
+
struct kvm;
struct kvm_vcpu;
struct kvm_run;
@@ -202,9 +239,13 @@ struct kvm_exit_mmio;
#ifdef CONFIG_KVM_ARM_VGIC
int kvm_vgic_set_addr(struct kvm *kvm, unsigned long type, u64 addr);
+void kvm_vgic_sync_to_cpu(struct kvm_vcpu *vcpu);
+void kvm_vgic_sync_from_cpu(struct kvm_vcpu *vcpu);
+int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu);
bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
struct kvm_exit_mmio *mmio);
+#define irqchip_in_kernel(k) (!!((k)->arch.vgic.vctrl_base))
#else
static inline int kvm_vgic_hyp_init(void)
{
diff --git a/arch/arm/kvm/vgic.c b/arch/arm/kvm/vgic.c
index 82feee8..d7cdec5 100644
--- a/arch/arm/kvm/vgic.c
+++ b/arch/arm/kvm/vgic.c
@@ -587,7 +587,25 @@ static void vgic_dispatch_sgi(struct kvm_vcpu *vcpu, u32 reg)
static int compute_pending_for_cpu(struct kvm_vcpu *vcpu)
{
- return 0;
+ struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+ unsigned long *pending, *enabled, *pend;
+ int vcpu_id;
+
+ vcpu_id = vcpu->vcpu_id;
+ pend = vcpu->arch.vgic_cpu.pending;
+
+ pending = vgic_bitmap_get_cpu_map(&dist->irq_state, vcpu_id);
+ enabled = vgic_bitmap_get_cpu_map(&dist->irq_enabled, vcpu_id);
+ bitmap_and(pend, pending, enabled, 32);
+
+ pending = vgic_bitmap_get_shared_map(&dist->irq_state);
+ enabled = vgic_bitmap_get_shared_map(&dist->irq_enabled);
+ bitmap_and(pend + 1, pending, enabled, VGIC_NR_SHARED_IRQS);
+ bitmap_and(pend + 1, pend + 1,
+ vgic_bitmap_get_shared_map(&dist->irq_spi_target[vcpu_id]),
+ VGIC_NR_SHARED_IRQS);
+
+ return (find_first_bit(pend, VGIC_NR_IRQS) < VGIC_NR_IRQS);
}
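compute_pending_for_cpu implements the oracle described in patch 06: private IRQs need only `state & enabled`, while shared IRQs additionally require this vcpu in the per-interrupt target bitmap. Flattened onto single 32-bit words, the logic is (the `cpu_has_pending` helper is a simplified stand-in, not the kernel function):

```c
#include <assert.h>
#include <stdint.h>

/* Per-cpu pending oracle on 32-bit words: private IRQs are
 * state & enabled; shared IRQs are state & enabled & target. */
static int cpu_has_pending(uint32_t priv_state, uint32_t priv_enabled,
                           uint32_t spi_state, uint32_t spi_enabled,
                           uint32_t spi_target)
{
    uint32_t priv = priv_state & priv_enabled;
    uint32_t spi = spi_state & spi_enabled & spi_target;

    return (priv | spi) != 0;
}
```

A pending-but-disabled interrupt, or a pending SPI routed to another vcpu, contributes nothing to this cpu's bit in irq_pending_on_cpu.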
/*
@@ -613,6 +631,212 @@ static void vgic_update_state(struct kvm *kvm)
}
}
+#define LR_PHYSID(lr) (((lr) & VGIC_LR_PHYSID_CPUID) >> 10)
+#define MK_LR_PEND(src, irq) (VGIC_LR_PENDING_BIT | ((src) << 10) | (irq))
+/*
+ * Queue an interrupt to a CPU virtual interface. Return true on success,
+ * or false if it wasn't possible to queue it.
+ */
+static bool vgic_queue_irq(struct kvm_vcpu *vcpu, u8 sgi_source_id, int irq)
+{
+ struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+ struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+ int lr, is_level;
+
+ /* Sanitize the input... */
+ BUG_ON(sgi_source_id & ~7);
+ BUG_ON(sgi_source_id && irq > 15);
+ BUG_ON(irq >= VGIC_NR_IRQS);
+
+ kvm_debug("Queue IRQ%d\n", irq);
+
+ lr = vgic_cpu->vgic_irq_lr_map[irq];
+ is_level = !vgic_irq_is_edge(dist, irq);
+
+ /* Do we have an active interrupt for the same CPUID? */
+ if (lr != LR_EMPTY &&
+ (LR_PHYSID(vgic_cpu->vgic_lr[lr]) == sgi_source_id)) {
+ kvm_debug("LR%d piggyback for IRQ%d %x\n", lr, irq, vgic_cpu->vgic_lr[lr]);
+ BUG_ON(!test_bit(lr, vgic_cpu->lr_used));
+ vgic_cpu->vgic_lr[lr] |= VGIC_LR_PENDING_BIT;
+ if (is_level)
+ vgic_cpu->vgic_lr[lr] |= VGIC_LR_EOI;
+ return true;
+ }
+
+ /* Try to use another LR for this interrupt */
+ lr = find_first_bit((unsigned long *)vgic_cpu->vgic_elrsr,
+ vgic_cpu->nr_lr);
+ if (lr >= vgic_cpu->nr_lr)
+ return false;
+
+ kvm_debug("LR%d allocated for IRQ%d %x\n", lr, irq, sgi_source_id);
+ vgic_cpu->vgic_lr[lr] = MK_LR_PEND(sgi_source_id, irq);
+ if (is_level)
+ vgic_cpu->vgic_lr[lr] |= VGIC_LR_EOI;
+
+ vgic_cpu->vgic_irq_lr_map[irq] = lr;
+ clear_bit(lr, (unsigned long *)vgic_cpu->vgic_elrsr);
+ set_bit(lr, vgic_cpu->lr_used);
+
+ return true;
+}
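The LR_PHYSID/MK_LR_PEND macros above pack one list register entry: virtual interrupt ID in bits [9:0], the SGI source CPUID in bits [12:10], and the pending state in bit 28 (with bit 19 as the EOI-maintenance flag). A self-contained restatement of that packing:

```c
#include <assert.h>
#include <stdint.h>

/* GICH list register layout as used by vgic_queue_irq: virtual ID in
 * bits [9:0], source CPUID in [12:10], pending in bit 28. */
#define LR_VIRTUALID     (0x3ffu << 0)
#define LR_PHYSID_CPUID  (7u << 10)
#define LR_PENDING       (1u << 28)

static uint32_t mk_lr_pend(uint32_t src, uint32_t irq)
{
    return LR_PENDING | (src << 10) | irq;
}

static uint32_t lr_physid(uint32_t lr)
{
    return (lr & LR_PHYSID_CPUID) >> 10;
}
```

For an SGI10 sent from CPU3, the LR value carries all three fields, which is why the "piggyback" path can compare source CPUIDs before merging a second pending state into an existing LR.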
+
+/*
+ * Fill the list registers with pending interrupts before running the
+ * guest.
+ */
+static void __kvm_vgic_sync_to_cpu(struct kvm_vcpu *vcpu)
+{
+ struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+ struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+ unsigned long *pending;
+ int i, c, vcpu_id;
+ int overflow = 0;
+
+ vcpu_id = vcpu->vcpu_id;
+
+ /*
+ * We may not have any pending interrupt, or the interrupts
+ * may have been serviced from another vcpu. In all cases,
+ * move along.
+ */
+ if (!kvm_vgic_vcpu_pending_irq(vcpu)) {
+ pr_debug("CPU%d has no pending interrupt\n", vcpu_id);
+ goto epilog;
+ }
+
+ /* SGIs */
+ pending = vgic_bitmap_get_cpu_map(&dist->irq_state, vcpu_id);
+ for_each_set_bit(i, vgic_cpu->pending, 16) {
+ unsigned long sources;
+
+ sources = dist->irq_sgi_sources[vcpu_id][i];
+ for_each_set_bit(c, &sources, 8) {
+ if (!vgic_queue_irq(vcpu, c, i)) {
+ overflow = 1;
+ continue;
+ }
+
+ clear_bit(c, &sources);
+ }
+
+ if (!sources)
+ clear_bit(i, pending);
+
+ dist->irq_sgi_sources[vcpu_id][i] = sources;
+ }
+
+ /* PPIs */
+ for_each_set_bit_from(i, vgic_cpu->pending, 32) {
+ if (!vgic_queue_irq(vcpu, 0, i)) {
+ overflow = 1;
+ continue;
+ }
+
+ clear_bit(i, pending);
+ }
+
+ /* SPIs */
+ pending = vgic_bitmap_get_shared_map(&dist->irq_state);
+ for_each_set_bit_from(i, vgic_cpu->pending, VGIC_NR_IRQS) {
+ if (vgic_bitmap_get_irq_val(&dist->irq_active, 0, i))
+ continue; /* level interrupt, already queued */
+
+ if (!vgic_queue_irq(vcpu, 0, i)) {
+ overflow = 1;
+ continue;
+ }
+
+ /* Immediate clear on edge, set active on level */
+ if (vgic_irq_is_edge(dist, i)) {
+ clear_bit(i - 32, pending);
+ clear_bit(i, vgic_cpu->pending);
+ } else {
+ vgic_bitmap_set_irq_val(&dist->irq_active, 0, i, 1);
+ }
+ }
+
+epilog:
+ if (overflow)
+ vgic_cpu->vgic_hcr |= VGIC_HCR_UIE;
+ else {
+ vgic_cpu->vgic_hcr &= ~VGIC_HCR_UIE;
+ /*
+ * We're about to run this VCPU, and we've consumed
+ * everything the distributor had in store for
+ * us. Claim we don't have anything pending. We'll
+ * adjust that if needed while exiting.
+ */
+ clear_bit(vcpu_id, &dist->irq_pending_on_cpu);
+ }
+}
+
+/*
+ * Sync back the VGIC state after a guest run. We do not really touch
+ * the distributor here (the irq_pending_on_cpu bit is safe to set),
+ * so there is no need for taking its lock.
+ */
+static void __kvm_vgic_sync_from_cpu(struct kvm_vcpu *vcpu)
+{
+ struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+ struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+ int lr, pending;
+
+ /* Clear mappings for empty LRs */
+ for_each_set_bit(lr, (unsigned long *)vgic_cpu->vgic_elrsr,
+ vgic_cpu->nr_lr) {
+ int irq;
+
+ if (!test_and_clear_bit(lr, vgic_cpu->lr_used))
+ continue;
+
+ irq = vgic_cpu->vgic_lr[lr] & VGIC_LR_VIRTUALID;
+
+ BUG_ON(irq >= VGIC_NR_IRQS);
+ vgic_cpu->vgic_irq_lr_map[irq] = LR_EMPTY;
+ }
+
+ /* Check if we still have something up our sleeve... */
+ pending = find_first_zero_bit((unsigned long *)vgic_cpu->vgic_elrsr,
+ vgic_cpu->nr_lr);
+ if (pending < vgic_cpu->nr_lr) {
+ set_bit(vcpu->vcpu_id, &dist->irq_pending_on_cpu);
+ smp_mb();
+ }
+}
+
+void kvm_vgic_sync_to_cpu(struct kvm_vcpu *vcpu)
+{
+ struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+
+ if (!irqchip_in_kernel(vcpu->kvm))
+ return;
+
+ spin_lock(&dist->lock);
+ __kvm_vgic_sync_to_cpu(vcpu);
+ spin_unlock(&dist->lock);
+}
+
+void kvm_vgic_sync_from_cpu(struct kvm_vcpu *vcpu)
+{
+ if (!irqchip_in_kernel(vcpu->kvm))
+ return;
+
+ __kvm_vgic_sync_from_cpu(vcpu);
+}
+
+int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
+{
+ struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+
+ if (!irqchip_in_kernel(vcpu->kvm))
+ return 0;
+
+ return test_bit(vcpu->vcpu_id, &dist->irq_pending_on_cpu);
+}
+
static bool vgic_ioaddr_overlap(struct kvm *kvm)
{
phys_addr_t dist = kvm->arch.vgic.vgic_dist_base;
* [PATCH v3 08/13] ARM: KVM: vgic: retire queued, disabled interrupts
From: Christoffer Dall @ 2012-10-22 6:52 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
An interrupt may have been disabled after being made pending on the
CPU interface (the classic case is a timer running while we're
rebooting the guest - the interrupt would kick as soon as the CPU
interface gets enabled, with deadly consequences).
The solution is to examine already active LRs, and check the
interrupt is still enabled. If not, just retire it.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
---
arch/arm/kvm/vgic.c | 30 ++++++++++++++++++++++++++++++
1 file changed, 30 insertions(+)
diff --git a/arch/arm/kvm/vgic.c b/arch/arm/kvm/vgic.c
index d7cdec5..dda5623 100644
--- a/arch/arm/kvm/vgic.c
+++ b/arch/arm/kvm/vgic.c
@@ -633,6 +633,34 @@ static void vgic_update_state(struct kvm *kvm)
#define LR_PHYSID(lr) (((lr) & VGIC_LR_PHYSID_CPUID) >> 10)
#define MK_LR_PEND(src, irq) (VGIC_LR_PENDING_BIT | ((src) << 10) | (irq))
+
+/*
+ * An interrupt may have been disabled after being made pending on the
+ * CPU interface (the classic case is a timer running while we're
+ * rebooting the guest - the interrupt would kick as soon as the CPU
+ * interface gets enabled, with deadly consequences).
+ *
+ * The solution is to examine already active LRs, and check the
+ * interrupt is still enabled. If not, just retire it.
+ */
+static void vgic_retire_disabled_irqs(struct kvm_vcpu *vcpu)
+{
+ struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+ struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+ int lr;
+
+ for_each_set_bit(lr, vgic_cpu->lr_used, vgic_cpu->nr_lr) {
+ int irq = vgic_cpu->vgic_lr[lr] & VGIC_LR_VIRTUALID;
+
+ if (!vgic_bitmap_get_irq_val(&dist->irq_enabled,
+ vcpu->vcpu_id, irq)) {
+ vgic_cpu->vgic_irq_lr_map[irq] = LR_EMPTY;
+ clear_bit(lr, vgic_cpu->lr_used);
+ vgic_cpu->vgic_lr[lr] &= ~VGIC_LR_STATE;
+ }
+ }
+}
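The retire pass walks only the in-use LRs, and for each interrupt that has since been disabled it drops the LR's state bits and forgets the irq-to-LR mapping. A toy model of that loop (the `toy_cpu`/`retire_disabled` names and the 4-LR array are assumptions made to keep the sketch self-contained):

```c
#include <assert.h>
#include <stdint.h>

#define NLR      4
#define LR_EMPTY 0xff
#define LR_STATE (3u << 28)   /* pending | active bits */

struct toy_cpu {
    uint32_t lr[NLR];         /* list registers, virtual ID in [9:0] */
    uint32_t lr_used;         /* bitmap of busy LRs */
    uint8_t irq_lr_map[128];  /* irq -> LR, LR_EMPTY if unmapped */
};

static void retire_disabled(struct toy_cpu *c, const uint8_t *enabled)
{
    int lr;

    for (lr = 0; lr < NLR; lr++) {
        int irq;

        if (!(c->lr_used & (1u << lr)))
            continue;
        irq = c->lr[lr] & 0x3ff;
        if (!enabled[irq]) {
            /* Retire: unmap, free the LR, clear its state bits. */
            c->irq_lr_map[irq] = LR_EMPTY;
            c->lr_used &= ~(1u << lr);
            c->lr[lr] &= ~LR_STATE;
        }
    }
}
```

This is exactly the reboot fix from the cover letter: a timer interrupt queued before the guest reset cannot fire once its line has been disabled.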
+
/*
* Queue an interrupt to a CPU virtual interface. Return true on success,
* or false if it wasn't possible to queue it.
@@ -696,6 +724,8 @@ static void __kvm_vgic_sync_to_cpu(struct kvm_vcpu *vcpu)
vcpu_id = vcpu->vcpu_id;
+ vgic_retire_disabled_irqs(vcpu);
+
/*
* We may not have any pending interrupt, or the interrupts
* may have been serviced from another vcpu. In all cases,
* [PATCH v3 09/13] ARM: KVM: VGIC interrupt injection
From: Christoffer Dall @ 2012-10-22 6:52 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Plug the interrupt injection code. Interrupts can now be generated
from user space.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
---
arch/arm/include/asm/kvm_vgic.h | 8 +++
arch/arm/kvm/arm.c | 29 +++++++++++++
arch/arm/kvm/vgic.c | 90 +++++++++++++++++++++++++++++++++++++++
3 files changed, 127 insertions(+)
diff --git a/arch/arm/include/asm/kvm_vgic.h b/arch/arm/include/asm/kvm_vgic.h
index 7229324..6e3d303 100644
--- a/arch/arm/include/asm/kvm_vgic.h
+++ b/arch/arm/include/asm/kvm_vgic.h
@@ -241,6 +241,8 @@ struct kvm_exit_mmio;
int kvm_vgic_set_addr(struct kvm *kvm, unsigned long type, u64 addr);
void kvm_vgic_sync_to_cpu(struct kvm_vcpu *vcpu);
void kvm_vgic_sync_from_cpu(struct kvm_vcpu *vcpu);
+int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int irq_num,
+ bool level);
int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu);
bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
struct kvm_exit_mmio *mmio);
@@ -271,6 +273,12 @@ static inline void kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu) {}
static inline void kvm_vgic_sync_to_cpu(struct kvm_vcpu *vcpu) {}
static inline void kvm_vgic_sync_from_cpu(struct kvm_vcpu *vcpu) {}
+static inline int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid,
+ const struct kvm_irq_level *irq)
+{
+ return 0;
+}
+
static inline int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
{
return 0;
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index f92b4ec..877e285 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -763,10 +763,31 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level)
switch (irq_type) {
case KVM_ARM_IRQ_TYPE_CPU:
+ if (irqchip_in_kernel(kvm))
+ return -ENXIO;
+
if (irq_num > KVM_ARM_IRQ_CPU_FIQ)
return -EINVAL;
return vcpu_interrupt_line(vcpu, irq_num, level);
+#ifdef CONFIG_KVM_ARM_VGIC
+ case KVM_ARM_IRQ_TYPE_PPI:
+ if (!irqchip_in_kernel(kvm))
+ return -ENXIO;
+
+ if (irq_num < 16 || irq_num > 31)
+ return -EINVAL;
+
+ return kvm_vgic_inject_irq(kvm, vcpu->vcpu_id, irq_num, level);
+ case KVM_ARM_IRQ_TYPE_SPI:
+ if (!irqchip_in_kernel(kvm))
+ return -ENXIO;
+
+ if (irq_num < 32 || irq_num > KVM_ARM_IRQ_GIC_MAX)
+ return -EINVAL;
+
+ return kvm_vgic_inject_irq(kvm, 0, irq_num, level);
+#endif
}
return -EINVAL;
@@ -848,6 +869,14 @@ long kvm_arch_vm_ioctl(struct file *filp,
void __user *argp = (void __user *)arg;
switch (ioctl) {
+#ifdef CONFIG_KVM_ARM_VGIC
+ case KVM_CREATE_IRQCHIP: {
+ if (vgic_present)
+ return kvm_vgic_create(kvm);
+ return -EINVAL;
+ }
+#endif
case KVM_SET_DEVICE_ADDRESS: {
struct kvm_device_address dev_addr;
diff --git a/arch/arm/kvm/vgic.c b/arch/arm/kvm/vgic.c
index dda5623..70040bb 100644
--- a/arch/arm/kvm/vgic.c
+++ b/arch/arm/kvm/vgic.c
@@ -75,6 +75,7 @@
#define ACCESS_WRITE_MASK(x) ((x) & (3 << 1))
static void vgic_update_state(struct kvm *kvm);
+static void vgic_kick_vcpus(struct kvm *kvm);
static void vgic_dispatch_sgi(struct kvm_vcpu *vcpu, u32 reg);
static inline int vgic_irq_is_edge(struct vgic_dist *dist, int irq)
@@ -542,6 +543,9 @@ bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run, struct kvm_exi
kvm_prepare_mmio(run, mmio);
kvm_handle_mmio_return(vcpu, run);
+ if (updated_state)
+ vgic_kick_vcpus(vcpu->kvm);
+
return true;
}
@@ -867,6 +871,92 @@ int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
return test_bit(vcpu->vcpu_id, &dist->irq_pending_on_cpu);
}
+static void vgic_kick_vcpus(struct kvm *kvm)
+{
+ struct kvm_vcpu *vcpu;
+ int c;
+
+ /*
+ * We've injected an interrupt, time to find out who deserves
+ * a good kick...
+ */
+ kvm_for_each_vcpu(c, vcpu, kvm) {
+ if (kvm_vgic_vcpu_pending_irq(vcpu))
+ kvm_vcpu_kick(vcpu);
+ }
+}
+
+static bool vgic_update_irq_state(struct kvm *kvm, int cpuid,
+ unsigned int irq_num, bool level)
+{
+ struct vgic_dist *dist = &kvm->arch.vgic;
+ struct kvm_vcpu *vcpu;
+ int is_edge, is_level, state;
+ int enabled;
+ bool ret = true;
+
+ spin_lock(&dist->lock);
+
+ is_edge = vgic_irq_is_edge(dist, irq_num);
+ is_level = !is_edge;
+ state = vgic_bitmap_get_irq_val(&dist->irq_state, cpuid, irq_num);
+
+ /*
+ * Only inject an interrupt if:
+ * - level triggered and we change level
+ * - edge triggered and we have a rising edge
+ */
+ if ((is_level && !(state ^ level)) || (is_edge && (state || !level))) {
+ ret = false;
+ goto out;
+ }
+
+ vgic_bitmap_set_irq_val(&dist->irq_state, cpuid, irq_num, level);
+
+ enabled = vgic_bitmap_get_irq_val(&dist->irq_enabled, cpuid, irq_num);
+
+ if (!enabled) {
+ ret = false;
+ goto out;
+ }
+
+ if (is_level && vgic_bitmap_get_irq_val(&dist->irq_active,
+ cpuid, irq_num)) {
+ /*
+ * Level interrupt in progress, will be picked up
+ * when EOId.
+ */
+ ret = false;
+ goto out;
+ }
+
+ if (irq_num >= 32)
+ cpuid = dist->irq_spi_cpu[irq_num - 32];
+
+ kvm_debug("Inject IRQ%d level %d CPU%d\n", irq_num, level, cpuid);
+
+ vcpu = kvm_get_vcpu(kvm, cpuid);
+
+ if (level) {
+ set_bit(irq_num, vcpu->arch.vgic_cpu.pending);
+ set_bit(cpuid, &dist->irq_pending_on_cpu);
+ }
+
+out:
+ spin_unlock(&dist->lock);
+
+ return ret;
+}
+
+int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int irq_num,
+ bool level)
+{
+ if (vgic_update_irq_state(kvm, cpuid, irq_num, level))
+ vgic_kick_vcpus(kvm);
+
+ return 0;
+}
+
static bool vgic_ioaddr_overlap(struct kvm *kvm)
{
phys_addr_t dist = kvm->arch.vgic.vgic_dist_base;
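The gating condition in vgic_update_irq_state() above is compact enough to pull out as a standalone predicate. The following is an illustrative model of the patch's filter, not kernel code:

```c
#include <stdbool.h>
#include <assert.h>

/* Model of the injection filter in vgic_update_irq_state(): an
 * injection is propagated only when a level-triggered line actually
 * changes level, or an edge-triggered line sees a rising edge while
 * currently low. Everything else is filtered out. */
static bool injection_changes_state(bool is_edge, bool state, bool level)
{
	bool is_level = !is_edge;

	if ((is_level && !(state ^ level)) || (is_edge && (state || !level)))
		return false;	/* filtered out, nothing to queue */

	return true;
}
```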
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH v3 10/13] ARM: KVM: VGIC control interface world switch
2012-10-22 6:51 [PATCH v3 00/13] KVM/ARM vGIC support Christoffer Dall
` (8 preceding siblings ...)
2012-10-22 6:52 ` [PATCH v3 09/13] ARM: KVM: VGIC interrupt injection Christoffer Dall
@ 2012-10-22 6:52 ` Christoffer Dall
2012-10-22 6:52 ` [PATCH v3 11/13] ARM: KVM: VGIC initialisation code Christoffer Dall
` (2 subsequent siblings)
12 siblings, 0 replies; 17+ messages in thread
From: Christoffer Dall @ 2012-10-22 6:52 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Enable the VGIC control interface state to be saved and restored on world switch.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
---
arch/arm/include/asm/kvm_arm.h | 12 +++++++
arch/arm/kernel/asm-offsets.c | 12 +++++++
arch/arm/kvm/interrupts_head.S | 68 ++++++++++++++++++++++++++++++++++++++++
3 files changed, 92 insertions(+)
diff --git a/arch/arm/include/asm/kvm_arm.h b/arch/arm/include/asm/kvm_arm.h
index 4f1bb01..e1e39d6 100644
--- a/arch/arm/include/asm/kvm_arm.h
+++ b/arch/arm/include/asm/kvm_arm.h
@@ -188,4 +188,16 @@
#define HSR_EC_DABT (0x24)
#define HSR_EC_DABT_HYP (0x25)
+/* GICH offsets */
+#define GICH_HCR 0x0
+#define GICH_VTR 0x4
+#define GICH_VMCR 0x8
+#define GICH_MISR 0x10
+#define GICH_EISR0 0x20
+#define GICH_EISR1 0x24
+#define GICH_ELRSR0 0x30
+#define GICH_ELRSR1 0x34
+#define GICH_APR 0xf0
+#define GICH_LR0 0x100
+
#endif /* __ARM_KVM_ARM_H__ */
diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c
index cf97d92..fba332b 100644
--- a/arch/arm/kernel/asm-offsets.c
+++ b/arch/arm/kernel/asm-offsets.c
@@ -167,6 +167,18 @@ int main(void)
DEFINE(VCPU_HxFAR, offsetof(struct kvm_vcpu, arch.hxfar));
DEFINE(VCPU_HPFAR, offsetof(struct kvm_vcpu, arch.hpfar));
DEFINE(VCPU_HYP_PC, offsetof(struct kvm_vcpu, arch.hyp_pc));
+#ifdef CONFIG_KVM_ARM_VGIC
+ DEFINE(VCPU_VGIC_CPU, offsetof(struct kvm_vcpu, arch.vgic_cpu));
+ DEFINE(VGIC_CPU_HCR, offsetof(struct vgic_cpu, vgic_hcr));
+ DEFINE(VGIC_CPU_VMCR, offsetof(struct vgic_cpu, vgic_vmcr));
+ DEFINE(VGIC_CPU_MISR, offsetof(struct vgic_cpu, vgic_misr));
+ DEFINE(VGIC_CPU_EISR, offsetof(struct vgic_cpu, vgic_eisr));
+ DEFINE(VGIC_CPU_ELRSR, offsetof(struct vgic_cpu, vgic_elrsr));
+ DEFINE(VGIC_CPU_APR, offsetof(struct vgic_cpu, vgic_apr));
+ DEFINE(VGIC_CPU_LR, offsetof(struct vgic_cpu, vgic_lr));
+ DEFINE(VGIC_CPU_NR_LR, offsetof(struct vgic_cpu, nr_lr));
+ DEFINE(KVM_VGIC_VCTRL, offsetof(struct kvm, arch.vgic.vctrl_base));
+#endif
DEFINE(KVM_VTTBR, offsetof(struct kvm, arch.vttbr));
#endif
return 0;
diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
index 2ac8b4a..c2423d8 100644
--- a/arch/arm/kvm/interrupts_head.S
+++ b/arch/arm/kvm/interrupts_head.S
@@ -341,6 +341,45 @@
* @vcpup: Register pointing to VCPU struct
*/
.macro save_vgic_state vcpup
+#ifdef CONFIG_KVM_ARM_VGIC
+ /* Get VGIC VCTRL base into r2 */
+ ldr r2, [\vcpup, #VCPU_KVM]
+ ldr r2, [r2, #KVM_VGIC_VCTRL]
+ cmp r2, #0
+ beq 2f
+
+ /* Compute the address of struct vgic_cpu */
+ add r11, \vcpup, #VCPU_VGIC_CPU
+
+ /* Save all interesting registers */
+ ldr r3, [r2, #GICH_HCR]
+ ldr r4, [r2, #GICH_VMCR]
+ ldr r5, [r2, #GICH_MISR]
+ ldr r6, [r2, #GICH_EISR0]
+ ldr r7, [r2, #GICH_EISR1]
+ ldr r8, [r2, #GICH_ELRSR0]
+ ldr r9, [r2, #GICH_ELRSR1]
+ ldr r10, [r2, #GICH_APR]
+
+ str r3, [r11, #VGIC_CPU_HCR]
+ str r4, [r11, #VGIC_CPU_VMCR]
+ str r5, [r11, #VGIC_CPU_MISR]
+ str r6, [r11, #VGIC_CPU_EISR]
+ str r7, [r11, #(VGIC_CPU_EISR + 4)]
+ str r8, [r11, #VGIC_CPU_ELRSR]
+ str r9, [r11, #(VGIC_CPU_ELRSR + 4)]
+ str r10, [r11, #VGIC_CPU_APR]
+
+ /* Save list registers */
+ add r2, r2, #GICH_LR0
+ add r3, r11, #VGIC_CPU_LR
+ ldr r4, [r11, #VGIC_CPU_NR_LR]
+1: ldr r6, [r2], #4
+ str r6, [r3], #4
+ subs r4, r4, #1
+ bne 1b
+2:
+#endif
.endm
/*
@@ -348,6 +387,35 @@
* @vcpup: Register pointing to VCPU struct
*/
.macro restore_vgic_state vcpup
+#ifdef CONFIG_KVM_ARM_VGIC
+ /* Get VGIC VCTRL base into r2 */
+ ldr r2, [\vcpup, #VCPU_KVM]
+ ldr r2, [r2, #KVM_VGIC_VCTRL]
+ cmp r2, #0
+ beq 2f
+
+ /* Compute the address of struct vgic_cpu */
+ add r11, \vcpup, #VCPU_VGIC_CPU
+
+ /* We only restore a minimal set of registers */
+ ldr r3, [r11, #VGIC_CPU_HCR]
+ ldr r4, [r11, #VGIC_CPU_VMCR]
+ ldr r8, [r11, #VGIC_CPU_APR]
+
+ str r3, [r2, #GICH_HCR]
+ str r4, [r2, #GICH_VMCR]
+ str r8, [r2, #GICH_APR]
+
+ /* Restore list registers */
+ add r2, r2, #GICH_LR0
+ add r3, r11, #VGIC_CPU_LR
+ ldr r4, [r11, #VGIC_CPU_NR_LR]
+1: ldr r6, [r3], #4
+ str r6, [r2], #4
+ subs r4, r4, #1
+ bne 1b
+2:
+#endif
.endm
/* Configures the HSTR (Hyp System Trap Register) on entry/return
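For clarity, the list-register copy loops in the two assembly macros above amount to the following C; this is an illustrative model of what the loops do (the restore path is the same loop with source and destination swapped), not a replacement for the asm:

```c
#include <stdint.h>
#include <assert.h>

/* C model of the save_vgic_state list-register loop: copy nr_lr
 * 32-bit words from the GICH_LR0.. register file into the vgic_cpu
 * shadow array. */
static void copy_list_registers(const uint32_t *src, uint32_t *dst, int nr_lr)
{
	for (int i = 0; i < nr_lr; i++)
		dst[i] = src[i];
}
```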
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH v3 11/13] ARM: KVM: VGIC initialisation code
2012-10-22 6:51 [PATCH v3 00/13] KVM/ARM vGIC support Christoffer Dall
` (9 preceding siblings ...)
2012-10-22 6:52 ` [PATCH v3 10/13] ARM: KVM: VGIC control interface world switch Christoffer Dall
@ 2012-10-22 6:52 ` Christoffer Dall
2012-10-22 6:52 ` [PATCH v3 12/13] ARM: KVM: vgic: reduce the number of vcpu kick Christoffer Dall
2012-10-22 6:52 ` [PATCH v3 13/13] ARM: KVM: Add VGIC configuration option Christoffer Dall
12 siblings, 0 replies; 17+ messages in thread
From: Christoffer Dall @ 2012-10-22 6:52 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Add the init code for the hypervisor, the virtual machine, and
the virtual CPUs.
An interrupt handler is also wired up to handle the VGIC maintenance
interrupt, which is used to deal with level-triggered interrupts and
List Register underflows.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
---
arch/arm/include/asm/kvm_vgic.h | 11 ++
arch/arm/kvm/arm.c | 14 ++
arch/arm/kvm/vgic.c | 237 +++++++++++++++++++++++++++++++++++++++
3 files changed, 258 insertions(+), 4 deletions(-)
diff --git a/arch/arm/include/asm/kvm_vgic.h b/arch/arm/include/asm/kvm_vgic.h
index 6e3d303..1287f75 100644
--- a/arch/arm/include/asm/kvm_vgic.h
+++ b/arch/arm/include/asm/kvm_vgic.h
@@ -154,6 +154,7 @@ static inline void vgic_bytemap_set_irq_val(struct vgic_bytemap *x,
struct vgic_dist {
#ifdef CONFIG_KVM_ARM_VGIC
spinlock_t lock;
+ bool ready;
/* Virtual control interface mapping */
void __iomem *vctrl_base;
@@ -239,6 +240,10 @@ struct kvm_exit_mmio;
#ifdef CONFIG_KVM_ARM_VGIC
int kvm_vgic_set_addr(struct kvm *kvm, unsigned long type, u64 addr);
+int kvm_vgic_hyp_init(void);
+int kvm_vgic_init(struct kvm *kvm);
+int kvm_vgic_create(struct kvm *kvm);
+void kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu);
void kvm_vgic_sync_to_cpu(struct kvm_vcpu *vcpu);
void kvm_vgic_sync_from_cpu(struct kvm_vcpu *vcpu);
int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int irq_num,
@@ -248,6 +253,7 @@ bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
struct kvm_exit_mmio *mmio);
#define irqchip_in_kernel(k) (!!((k)->arch.vgic.vctrl_base))
+#define vgic_initialized(k) ((k)->arch.vgic.ready)
#else
static inline int kvm_vgic_hyp_init(void)
{
@@ -294,6 +300,11 @@ static inline int irqchip_in_kernel(struct kvm *kvm)
{
return 0;
}
+
+static inline bool vgic_initialized(struct kvm *kvm)
+{
+ return true;
+}
#endif
#endif
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 877e285..d367831 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -187,6 +187,8 @@ int kvm_dev_ioctl_check_extension(long ext)
switch (ext) {
#ifdef CONFIG_KVM_ARM_VGIC
case KVM_CAP_IRQCHIP:
+ r = vgic_present;
+ break;
#endif
case KVM_CAP_USER_MEMORY:
case KVM_CAP_DESTROY_MEMORY_REGION_WORKS:
@@ -622,6 +624,14 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
if (unlikely(vcpu->arch.target < 0))
return -ENOEXEC;
+ /* Initialize the VGIC before running a vcpu the first time on this VM */
+ if (unlikely(irqchip_in_kernel(vcpu->kvm) &&
+ !vgic_initialized(vcpu->kvm))) {
+ ret = kvm_vgic_init(vcpu->kvm);
+ if (ret)
+ return ret;
+ }
+
if (run->exit_reason == KVM_EXIT_MMIO) {
ret = kvm_handle_mmio_return(vcpu, vcpu->run);
if (ret)
@@ -1023,8 +1033,8 @@ static int init_hyp_mode(void)
* Init HYP view of VGIC
*/
err = kvm_vgic_hyp_init();
- if (err)
- goto out_free_mappings;
+ if (!err)
+ vgic_present = true;
return 0;
out_free_vfp:
diff --git a/arch/arm/kvm/vgic.c b/arch/arm/kvm/vgic.c
index 70040bb..415ddb8 100644
--- a/arch/arm/kvm/vgic.c
+++ b/arch/arm/kvm/vgic.c
@@ -20,7 +20,14 @@
#include <linux/kvm_host.h>
#include <linux/interrupt.h>
#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_irq.h>
+
#include <asm/kvm_emulate.h>
+#include <asm/hardware/gic.h>
+#include <asm/kvm_arm.h>
+#include <asm/kvm_mmu.h>
/*
* How the whole thing works (courtesy of Christoffer Dall):
@@ -59,11 +66,18 @@
*/
#define VGIC_ADDR_UNDEF (-1)
-#define IS_VGIC_ADDR_UNDEF(_x) ((_x) == (typeof(_x))VGIC_ADDR_UNDEF)
+#define IS_VGIC_ADDR_UNDEF(_x) ((_x) == VGIC_ADDR_UNDEF)
#define VGIC_DIST_SIZE 0x1000
#define VGIC_CPU_SIZE 0x2000
+/* Physical address of vgic virtual cpu interface */
+static phys_addr_t vgic_vcpu_base;
+
+/* Virtual control interface base address */
+static void __iomem *vgic_vctrl_base;
+
+static struct device_node *vgic_node;
#define ACCESS_READ_VALUE (1 << 0)
#define ACCESS_READ_RAZ (0 << 0)
@@ -527,7 +541,7 @@ bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run, struct kvm_exi
if (!irqchip_in_kernel(vcpu->kvm) ||
mmio->phys_addr < base ||
- (mmio->phys_addr + mmio->len) > (base + dist->vgic_dist_size))
+ (mmio->phys_addr + mmio->len) > (base + VGIC_DIST_SIZE))
return false;
range = find_matching_range(vgic_ranges, mmio, base);
@@ -957,6 +971,225 @@ int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int irq_num,
return 0;
}
+static irqreturn_t vgic_maintenance_handler(int irq, void *data)
+{
+ struct kvm_vcpu *vcpu = *(struct kvm_vcpu **)data;
+ struct vgic_dist *dist;
+ struct vgic_cpu *vgic_cpu;
+
+ if (WARN(!vcpu,
+ "VGIC interrupt on CPU %d with no vcpu\n", smp_processor_id()))
+ return IRQ_HANDLED;
+
+ vgic_cpu = &vcpu->arch.vgic_cpu;
+ dist = &vcpu->kvm->arch.vgic;
+ kvm_debug("MISR = %08x\n", vgic_cpu->vgic_misr);
+
+ /*
+ * We do not need to take the distributor lock here, since the only
+ * action we perform is clearing the irq_active_bit for an EOIed
+ * level interrupt. There is a potential race with
+ * the queuing of an interrupt in __kvm_sync_to_cpu(), where we check
+ * if the interrupt is already active. Two possibilities:
+ *
+ * - The queuing is occurring on the same vcpu: cannot happen, as we're
+ * already in the context of this vcpu, and executing the handler
+ * - The interrupt has been migrated to another vcpu, and we ignore
+ * this interrupt for this run. Big deal. It is still pending though,
+ * and will get considered when this vcpu exits.
+ */
+ if (vgic_cpu->vgic_misr & VGIC_MISR_EOI) {
+ /*
+ * Some level interrupts have been EOIed. Clear their
+ * active bit.
+ */
+ int lr, irq;
+
+ for_each_set_bit(lr, (unsigned long *)vgic_cpu->vgic_eisr,
+ vgic_cpu->nr_lr) {
+ irq = vgic_cpu->vgic_lr[lr] & VGIC_LR_VIRTUALID;
+
+ vgic_bitmap_set_irq_val(&dist->irq_active,
+ vcpu->vcpu_id, irq, 0);
+ vgic_cpu->vgic_lr[lr] &= ~VGIC_LR_EOI;
+ writel_relaxed(vgic_cpu->vgic_lr[lr],
+ dist->vctrl_base + GICH_LR0 + (lr << 2));
+
+ /* Any additional pending interrupt? */
+ if (vgic_bitmap_get_irq_val(&dist->irq_state,
+ vcpu->vcpu_id, irq)) {
+ set_bit(irq, vcpu->arch.vgic_cpu.pending);
+ set_bit(vcpu->vcpu_id,
+ &dist->irq_pending_on_cpu);
+ } else {
+ clear_bit(irq, vgic_cpu->pending);
+ }
+ }
+ }
+
+ if (vgic_cpu->vgic_misr & VGIC_MISR_U) {
+ vgic_cpu->vgic_hcr &= ~VGIC_HCR_UIE;
+ writel_relaxed(vgic_cpu->vgic_hcr, dist->vctrl_base + GICH_HCR);
+ }
+
+ return IRQ_HANDLED;
+}
+
+void kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu)
+{
+ struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+ struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+ u32 reg;
+ int i;
+
+ if (!irqchip_in_kernel(vcpu->kvm))
+ return;
+
+ for (i = 0; i < VGIC_NR_IRQS; i++) {
+ if (i < 16)
+ vgic_bitmap_set_irq_val(&dist->irq_enabled,
+ vcpu->vcpu_id, i, 1);
+ if (i < 32)
+ vgic_bitmap_set_irq_val(&dist->irq_cfg,
+ vcpu->vcpu_id, i, 1);
+
+ vgic_cpu->vgic_irq_lr_map[i] = LR_EMPTY;
+ }
+
+ BUG_ON(!vcpu->kvm->arch.vgic.vctrl_base);
+ reg = readl_relaxed(vcpu->kvm->arch.vgic.vctrl_base + GICH_VTR);
+ vgic_cpu->nr_lr = (reg & 0x1f) + 1;
+
+ reg = readl_relaxed(vcpu->kvm->arch.vgic.vctrl_base + GICH_VMCR);
+ vgic_cpu->vgic_vmcr = reg | (0x1f << 27); /* Priority */
+
+ vgic_cpu->vgic_hcr |= VGIC_HCR_EN; /* Get the show on the road... */
+}
+
+static void vgic_init_maintenance_interrupt(void *info)
+{
+ unsigned int *irqp = info;
+
+ enable_percpu_irq(*irqp, 0);
+}
+
+int kvm_vgic_hyp_init(void)
+{
+ int ret;
+ unsigned int irq;
+ struct resource vctrl_res;
+ struct resource vcpu_res;
+
+ vgic_node = of_find_compatible_node(NULL, NULL, "arm,cortex-a15-gic");
+ if (!vgic_node)
+ return -ENODEV;
+
+ irq = irq_of_parse_and_map(vgic_node, 0);
+ if (!irq)
+ return -ENXIO;
+
+ ret = request_percpu_irq(irq, vgic_maintenance_handler,
+ "vgic", kvm_get_running_vcpus());
+ if (ret) {
+ kvm_err("Cannot register interrupt %d\n", irq);
+ return ret;
+ }
+
+ ret = of_address_to_resource(vgic_node, 2, &vctrl_res);
+ if (ret) {
+ kvm_err("Cannot obtain VCTRL resource\n");
+ goto out_free_irq;
+ }
+
+ vgic_vctrl_base = of_iomap(vgic_node, 2);
+ if (!vgic_vctrl_base) {
+ kvm_err("Cannot ioremap VCTRL\n");
+ ret = -ENOMEM;
+ goto out_free_irq;
+ }
+
+ ret = create_hyp_io_mappings(vgic_vctrl_base,
+ vgic_vctrl_base + resource_size(&vctrl_res),
+ vctrl_res.start);
+ if (ret) {
+ kvm_err("Cannot map VCTRL into hyp\n");
+ goto out_unmap;
+ }
+
+ kvm_info("%s@%llx IRQ%d\n", vgic_node->name, vctrl_res.start, irq);
+ on_each_cpu(vgic_init_maintenance_interrupt, &irq, 1);
+
+ if (of_address_to_resource(vgic_node, 3, &vcpu_res)) {
+ kvm_err("Cannot obtain VCPU resource\n");
+ ret = -ENXIO;
+ goto out_unmap;
+ }
+ vgic_vcpu_base = vcpu_res.start;
+
+ return 0;
+
+out_unmap:
+ iounmap(vgic_vctrl_base);
+out_free_irq:
+ free_percpu_irq(irq, kvm_get_running_vcpus());
+
+ return ret;
+}
+
+int kvm_vgic_init(struct kvm *kvm)
+{
+ int ret = 0, i;
+
+ mutex_lock(&kvm->lock);
+
+ if (vgic_initialized(kvm))
+ goto out;
+
+ if (IS_VGIC_ADDR_UNDEF(kvm->arch.vgic.vgic_dist_base) ||
+ IS_VGIC_ADDR_UNDEF(kvm->arch.vgic.vgic_cpu_base)) {
+ kvm_err("Need to set vgic cpu and dist addresses first\n");
+ ret = -ENXIO;
+ goto out;
+ }
+
+ ret = kvm_phys_addr_ioremap(kvm, kvm->arch.vgic.vgic_cpu_base,
+ vgic_vcpu_base, VGIC_CPU_SIZE);
+ if (ret) {
+ kvm_err("Unable to remap VGIC CPU to VCPU\n");
+ goto out;
+ }
+
+ for (i = 32; i < VGIC_NR_IRQS; i += 4)
+ vgic_set_target_reg(kvm, 0, i);
+
+ kvm->arch.vgic.ready = true;
+out:
+ mutex_unlock(&kvm->lock);
+ return ret;
+}
+
+int kvm_vgic_create(struct kvm *kvm)
+{
+ int ret;
+
+ mutex_lock(&kvm->lock);
+
+ if (atomic_read(&kvm->online_vcpus) || kvm->arch.vgic.vctrl_base) {
+ ret = -EEXIST;
+ goto out;
+ }
+
+ spin_lock_init(&kvm->arch.vgic.lock);
+ kvm->arch.vgic.vctrl_base = vgic_vctrl_base;
+ kvm->arch.vgic.vgic_dist_base = VGIC_ADDR_UNDEF;
+ kvm->arch.vgic.vgic_cpu_base = VGIC_ADDR_UNDEF;
+
+ ret = 0;
+out:
+ mutex_unlock(&kvm->lock);
+ return ret;
+}
+
static bool vgic_ioaddr_overlap(struct kvm *kvm)
{
phys_addr_t dist = kvm->arch.vgic.vgic_dist_base;
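The GICH_VTR read in kvm_vgic_vcpu_init() above hides a small decoding rule worth spelling out: the bottom five bits hold the number of implemented list registers minus one. A sketch (the 0x1f mask comes straight from the patch):

```c
#include <stdint.h>
#include <assert.h>

/* GICH_VTR.ListRegs holds (number of implemented list registers - 1)
 * in its bottom five bits, as read in kvm_vgic_vcpu_init(). */
static int vtr_to_nr_lr(uint32_t vtr)
{
	return (int)(vtr & 0x1f) + 1;
}
```

So a Cortex-A15, which implements 4 list registers, reports 3 in that field.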
* [PATCH v3 12/13] ARM: KVM: vgic: reduce the number of vcpu kick
2012-10-22 6:51 [PATCH v3 00/13] KVM/ARM vGIC support Christoffer Dall
` (10 preceding siblings ...)
2012-10-22 6:52 ` [PATCH v3 11/13] ARM: KVM: VGIC initialisation code Christoffer Dall
@ 2012-10-22 6:52 ` Christoffer Dall
2012-10-22 6:52 ` [PATCH v3 13/13] ARM: KVM: Add VGIC configuration option Christoffer Dall
12 siblings, 0 replies; 17+ messages in thread
From: Christoffer Dall @ 2012-10-22 6:52 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
If we have a level interrupt already programmed to fire on a vcpu,
there is no need to kick it after injecting a new interrupt: we are
guaranteed to exit when the level interrupt is EOIed (VGIC_LR_EOI is
set), and that exit forces a reload of the VGIC state, injecting the
new interrupt.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
---
arch/arm/include/asm/kvm_vgic.h | 10 ++++++++++
arch/arm/kvm/arm.c | 10 +++++++++-
arch/arm/kvm/vgic.c | 10 ++++++++--
3 files changed, 27 insertions(+), 3 deletions(-)
diff --git a/arch/arm/include/asm/kvm_vgic.h b/arch/arm/include/asm/kvm_vgic.h
index 1287f75..447ec7a 100644
--- a/arch/arm/include/asm/kvm_vgic.h
+++ b/arch/arm/include/asm/kvm_vgic.h
@@ -215,6 +215,9 @@ struct vgic_cpu {
u32 vgic_elrsr[2]; /* Saved only */
u32 vgic_apr;
u32 vgic_lr[64]; /* A15 has only 4... */
+
+ /* Number of level-triggered interrupts in progress */
+ atomic_t irq_active_count;
#endif
};
@@ -254,6 +257,8 @@ bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
#define irqchip_in_kernel(k) (!!((k)->arch.vgic.vctrl_base))
#define vgic_initialized(k) ((k)->arch.vgic.ready)
+#define vgic_active_irq(v) (atomic_read(&(v)->arch.vgic_cpu.irq_active_count) != 0)
+
#else
static inline int kvm_vgic_hyp_init(void)
{
@@ -305,6 +310,11 @@ static inline bool kvm_vgic_initialized(struct kvm *kvm)
{
return true;
}
+
+static inline int vgic_active_irq(struct kvm_vcpu *vcpu)
+{
+ return 0;
+}
#endif
#endif
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index d367831..cab9cb7 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -94,7 +94,15 @@ int kvm_arch_hardware_enable(void *garbage)
int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
{
- return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
+ if (kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE) {
+ if (vgic_active_irq(vcpu) &&
+ cmpxchg(&vcpu->mode, EXITING_GUEST_MODE, IN_GUEST_MODE) == EXITING_GUEST_MODE)
+ return 0;
+
+ return 1;
+ }
+
+ return 0;
}
void kvm_arch_hardware_disable(void *garbage)
diff --git a/arch/arm/kvm/vgic.c b/arch/arm/kvm/vgic.c
index 415ddb8..146de1d 100644
--- a/arch/arm/kvm/vgic.c
+++ b/arch/arm/kvm/vgic.c
@@ -705,8 +705,10 @@ static bool vgic_queue_irq(struct kvm_vcpu *vcpu, u8 sgi_source_id, int irq)
kvm_debug("LR%d piggyback for IRQ%d %x\n", lr, irq, vgic_cpu->vgic_lr[lr]);
BUG_ON(!test_bit(lr, vgic_cpu->lr_used));
vgic_cpu->vgic_lr[lr] |= VGIC_LR_PENDING_BIT;
- if (is_level)
+ if (is_level) {
vgic_cpu->vgic_lr[lr] |= VGIC_LR_EOI;
+ atomic_inc(&vgic_cpu->irq_active_count);
+ }
return true;
}
@@ -718,8 +720,10 @@ static bool vgic_queue_irq(struct kvm_vcpu *vcpu, u8 sgi_source_id, int irq)
kvm_debug("LR%d allocated for IRQ%d %x\n", lr, irq, sgi_source_id);
vgic_cpu->vgic_lr[lr] = MK_LR_PEND(sgi_source_id, irq);
- if (is_level)
+ if (is_level) {
vgic_cpu->vgic_lr[lr] |= VGIC_LR_EOI;
+ atomic_inc(&vgic_cpu->irq_active_count);
+ }
vgic_cpu->vgic_irq_lr_map[irq] = lr;
clear_bit(lr, (unsigned long *)vgic_cpu->vgic_elrsr);
@@ -1011,6 +1015,8 @@ static irqreturn_t vgic_maintenance_handler(int irq, void *data)
vgic_bitmap_set_irq_val(&dist->irq_active,
vcpu->vcpu_id, irq, 0);
+ atomic_dec(&vgic_cpu->irq_active_count);
+ smp_mb();
vgic_cpu->vgic_lr[lr] &= ~VGIC_LR_EOI;
writel_relaxed(vgic_cpu->vgic_lr[lr],
dist->vctrl_base + GICH_LR0 + (lr << 2));
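The mode juggling in kvm_arch_vcpu_should_kick() above can be modelled in plain C. This is a simplified sketch of the patch's logic under the assumption that vcpu->mode behaves like an atomic int, not a faithful reimplementation of the kernel helpers:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <assert.h>

enum { OUTSIDE_GUEST_MODE, IN_GUEST_MODE, EXITING_GUEST_MODE };

/* Model of the kick-avoidance path: first try to flag the vcpu as
 * exiting (what kvm_vcpu_exiting_guest_mode() does); if it was in
 * guest mode but already has a level interrupt in flight, flip it
 * back to IN_GUEST_MODE and skip the IPI, since the EOI maintenance
 * exit will deliver the new interrupt anyway. */
static bool should_kick(_Atomic int *mode, int active_irqs)
{
	int expected = IN_GUEST_MODE;

	if (!atomic_compare_exchange_strong(mode, &expected,
					    EXITING_GUEST_MODE))
		return false;		/* not in guest mode: nothing to do */

	expected = EXITING_GUEST_MODE;
	if (active_irqs > 0 &&
	    atomic_compare_exchange_strong(mode, &expected, IN_GUEST_MODE))
		return false;		/* exit on EOI will pick it up */

	return true;
}
```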
* [PATCH v3 13/13] ARM: KVM: Add VGIC configuration option
2012-10-22 6:51 [PATCH v3 00/13] KVM/ARM vGIC support Christoffer Dall
` (11 preceding siblings ...)
2012-10-22 6:52 ` [PATCH v3 12/13] ARM: KVM: vgic: reduce the number of vcpu kick Christoffer Dall
@ 2012-10-22 6:52 ` Christoffer Dall
12 siblings, 0 replies; 17+ messages in thread
From: Christoffer Dall @ 2012-10-22 6:52 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
It is now possible to select the VGIC configuration option.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
---
arch/arm/kvm/Kconfig | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/arch/arm/kvm/Kconfig b/arch/arm/kvm/Kconfig
index 47c5500..867551e 100644
--- a/arch/arm/kvm/Kconfig
+++ b/arch/arm/kvm/Kconfig
@@ -40,6 +40,13 @@ config KVM_ARM_HOST
---help---
Provides host support for ARM processors.
+config KVM_ARM_VGIC
+ bool "KVM support for Virtual GIC"
+ depends on KVM_ARM_HOST && OF
+ select HAVE_KVM_IRQCHIP
+ ---help---
+ Adds support for hardware-assisted, in-kernel GIC emulation.
+
source drivers/virtio/Kconfig
endif # VIRTUALIZATION
* [kvmarm] [PATCH v3 01/13] KVM: ARM: Introduce KVM_SET_DEVICE_ADDRESS ioctl
2012-10-22 6:51 ` [PATCH v3 01/13] KVM: ARM: Introduce KVM_SET_DEVICE_ADDRESS ioctl Christoffer Dall
@ 2012-11-09 13:45 ` Peter Maydell
2012-11-09 13:51 ` Benjamin Herrenschmidt
0 siblings, 1 reply; 17+ messages in thread
From: Peter Maydell @ 2012-11-09 13:45 UTC (permalink / raw)
To: linux-arm-kernel
On 22 October 2012 08:51, Christoffer Dall
<c.dall@virtualopensystems.com> wrote:
> +struct kvm_device_address {
> + __u32 id;
> + __u64 addr;
> +};
Ben suggested that this should either be a 64 bit id or have explicit
padding. Other than that I think that our current proposed ABI for
ARM irqchips is in line with the discussion we just had at KVM Forum
for handling non-x86 in-kernel irqchips [hopefully somebody will
write the details of that up...]
Ben, do I have that right?
thanks
-- PMM
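For reference, the two layouts Ben's comment allows could look like the following; both structs are illustrative sketches of the alternatives discussed, not the ABI verbatim (the 64-bit id variant is the option Christoffer picks further down the thread):

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Two ways to avoid a compiler-inserted hole between a 32-bit id and
 * a 64-bit addr: explicit padding, or simply widening the id. */
struct kvm_device_address_padded {
	uint32_t id;
	uint32_t pad;		/* explicit padding, must be zero */
	uint64_t addr;
};

struct kvm_device_address_wide {
	uint64_t id;
	uint64_t addr;
};
```

Either way the struct layout is identical on 32-bit and 64-bit userspace, which is the point of the request.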
* [kvmarm] [PATCH v3 01/13] KVM: ARM: Introduce KVM_SET_DEVICE_ADDRESS ioctl
2012-11-09 13:45 ` [kvmarm] " Peter Maydell
@ 2012-11-09 13:51 ` Benjamin Herrenschmidt
2012-11-10 8:36 ` Christoffer Dall
0 siblings, 1 reply; 17+ messages in thread
From: Benjamin Herrenschmidt @ 2012-11-09 13:51 UTC (permalink / raw)
To: linux-arm-kernel
On Fri, 2012-11-09 at 14:45 +0100, Peter Maydell wrote:
> On 22 October 2012 08:51, Christoffer Dall
> <c.dall@virtualopensystems.com> wrote:
> > +struct kvm_device_address {
> > + __u32 id;
> > + __u64 addr;
> > +};
>
> Ben suggested that this should either be a 64 bit id or have explicit
> padding. Other than that I think that our current proposed ABI for
> ARM irqchips is in line with the discussion we just had at KVM Forum
> for handling non-x86 in-kernel irqchips [hopefully somebody will
> write the details of that up...]
>
> Ben, do I have that right?
I believe it is :-)
Cheers,
Ben.
* [kvmarm] [PATCH v3 01/13] KVM: ARM: Introduce KVM_SET_DEVICE_ADDRESS ioctl
2012-11-09 13:51 ` Benjamin Herrenschmidt
@ 2012-11-10 8:36 ` Christoffer Dall
0 siblings, 0 replies; 17+ messages in thread
From: Christoffer Dall @ 2012-11-10 8:36 UTC (permalink / raw)
To: linux-arm-kernel
On Fri, Nov 9, 2012 at 2:51 PM, Benjamin Herrenschmidt
<benh@kernel.crashing.org> wrote:
> On Fri, 2012-11-09 at 14:45 +0100, Peter Maydell wrote:
>> On 22 October 2012 08:51, Christoffer Dall
>> <c.dall@virtualopensystems.com> wrote:
>> > +struct kvm_device_address {
>> > + __u32 id;
>> > + __u64 addr;
>> > +};
>>
>> Ben suggested that this should either be a 64 bit id or have explicit
>> padding. Other than that I think that our current proposed ABI for
>> ARM irqchips is in line with the discussion we just had at KVM Forum
>> for handling non-x86 in-kernel irqchips [hopefully somebody will
>> write the details of that up...]
>>
>> Ben, do I have that right?
>
> I believe it is :-)
>
good, I'll rev the id to a 64 bit field for the next series (coming shortly)
end of thread, newest: 2012-11-10 8:36 UTC
Thread overview: 17+ messages
2012-10-22 6:51 [PATCH v3 00/13] KVM/ARM vGIC support Christoffer Dall
2012-10-22 6:51 ` [PATCH v3 01/13] KVM: ARM: Introduce KVM_SET_DEVICE_ADDRESS ioctl Christoffer Dall
2012-11-09 13:45 ` [kvmarm] " Peter Maydell
2012-11-09 13:51 ` Benjamin Herrenschmidt
2012-11-10 8:36 ` Christoffer Dall
2012-10-22 6:51 ` [PATCH v3 02/13] ARM: KVM: Keep track of currently running vcpus Christoffer Dall
2012-10-22 6:51 ` [PATCH v3 03/13] ARM: KVM: Initial VGIC infrastructure support Christoffer Dall
2012-10-22 6:51 ` [PATCH v3 04/13] ARM: KVM: Initial VGIC MMIO support code Christoffer Dall
2012-10-22 6:51 ` [PATCH v3 05/13] ARM: KVM: VGIC accept vcpu and dist base addresses from user space Christoffer Dall
2012-10-22 6:51 ` [PATCH v3 06/13] ARM: KVM: VGIC distributor handling Christoffer Dall
2012-10-22 6:52 ` [PATCH v3 07/13] ARM: KVM: VGIC virtual CPU interface management Christoffer Dall
2012-10-22 6:52 ` [PATCH v3 08/13] ARM: KVM: vgic: retire queued, disabled interrupts Christoffer Dall
2012-10-22 6:52 ` [PATCH v3 09/13] ARM: KVM: VGIC interrupt injection Christoffer Dall
2012-10-22 6:52 ` [PATCH v3 10/13] ARM: KVM: VGIC control interface world switch Christoffer Dall
2012-10-22 6:52 ` [PATCH v3 11/13] ARM: KVM: VGIC initialisation code Christoffer Dall
2012-10-22 6:52 ` [PATCH v3 12/13] ARM: KVM: vgic: reduce the number of vcpu kick Christoffer Dall
2012-10-22 6:52 ` [PATCH v3 13/13] ARM: KVM: Add VGIC configuration option Christoffer Dall