* [PATCH v2 00/10] KVM/ARM Implementation
@ 2012-10-01 9:07 Christoffer Dall
2012-10-01 9:07 ` [PATCH v2 01/10] ARM: KVM: Keep track of currently running vcpus Christoffer Dall
` (5 more replies)
0 siblings, 6 replies; 8+ messages in thread
From: Christoffer Dall @ 2012-10-01 9:07 UTC (permalink / raw)
To: linux-arm-kernel
The following series implements KVM support for ARM processors,
specifically on the Cortex-A15 platform. We feel this is ready to be
merged.
Work is done in collaboration between Columbia University, Virtual Open
Systems and ARM/Linaro.
The patch series applies to Linux 3.6 with a number of merges:
1. git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git
branch: hyp-mode-boot-next (e5a04cb0b4a)
2. git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git
branch: timers-next (437814c44c)
3. git://git.kernel.org/pub/scm/virt/kvm/kvm.git
branch: next (1e08ec4a)
This is version 12 of the patch series; the first 10 versions were
reviewed on the KVM/ARM and KVM mailing lists. Changes can also be
pulled from:
git://github.com/virtualopensystems/linux-kvm-arm.git
branch: kvm-arm-v12
branch: kvm-arm-v12-vgic
branch: kvm-arm-v12-vgic-timers
A non-flattened edition of the patch series, which can always be merged,
can be found at:
git://github.com/virtualopensystems/linux-kvm-arm.git kvm-arm-master
This patch series requires a compatible QEMU. Use the branch
git://github.com/virtualopensystems/qemu.git kvm-arm
Following this patch series, which implements core KVM support, are two
other patch series implementing Virtual Generic Interrupt Controller
(VGIC) support and Architected Generic Timers. All three patch series
should be applied for full QEMU compatibility.
The implementation is broken up into a logical set of patches; the first
three are preparatory patches:
1. ARM: Add page table defines for KVM
2. ARM: Section based HYP idmaps
3. ARM: Factor out cpuid implementor and part_number fields
The main implementation is broken up into separate patches, the first
containing a skeleton of files, makefile changes, the basic user space
interface and KVM architecture specific stubs. Subsequent patches
implement parts of the system as listed:
4. Skeleton and reset hooks
5. Hypervisor initialization
6. Memory virtualization setup (hyp mode mappings and 2nd stage)
7. Inject IRQs and FIQs from userspace
8. World-switch implementation and Hyp exception vectors
9. Emulation framework and coproc emulation
10. Coproc user space API
11. Demux multiplexed coproc registers
12. User space API to get/set VFP registers
13. Handle guest user memory aborts
14. Handle guest MMIO aborts
Testing:
Tested on ARM Fast Models and Versatile Express test-chip2. Tested by
running three simultaneous VMs, all running SMP, on an SMP host, each
VM running hackbench and cyclictest and with extreme memory pressure
applied to the host with swapping enabled to provoke page eviction.
Also tested KSM merging and GCC inside VMs. Fully boots both Ubuntu
(user space Thumb-2) and Debian (user space ARM) guests.
For a guide on how to set up a testing environment and try out these
patches, see:
http://www.virtualopensystems.com/media/pdf/kvm-arm-guide.pdf
Changes since v11:
- Memory setup and page table defines reworked
- We do not export unused perf bitfields anymore
- No module support anymore and following cleanup
- Hide vcpu register accessors
- Fix unmap range mmu notifier race condition
- Factored out A15 coprocs in separate file
- Factored out world-switch assembly macros to separate file
- Add demux of multiplexed coprocs to user space
- Add VFP get/set interface to user space
- Addressed various cleanup comments from reviewers
Changes since v10:
- Boot in Hyp mode and use HVC to initialize HVBAR
- Support VGIC
- Support Arch timers
- Support Thumb-2 mmio instruction decoding
- Transition to GET_ONE/SET_ONE register API
- Added KVM_VCPU_GET_REG_LIST
- New interrupt injection API
- Don't pin guest pages anymore
- Fix race condition in page fault handler
- Cleanup guest instruction copying.
- Fix race when copying SMP guest instructions
- Inject data/prefetch aborts when guest does something strange
Changes since v9:
- Addressed reviewer comments (see mailing list archive)
- Limit the use of .arch_extension sec/virt to compilers that need them
- VFP/Neon Support (Antonios Motakis)
- Run exit handling under preemption and still handle guest cache ops
- Add support for IO mapping at Hyp level (VGIC prep)
- Add support for IO mapping at Guest level (VGIC prep)
- Remove backdoor call to irq_svc
- Complete rework of CP15 handling and register reset (Rusty Russell)
- Don't use HSTR for anything other than CR 15
- New ioctl to set emulation target core (only A15 supported for now)
- Support KVM_GET_MSRS / KVM_SET_MSRS
- Add page accounting and page table eviction
- Change pgd lock to spinlock and fix sleeping in atomic bugs
- Check kvm_condition_valid for HVC traps of undefs
- Added a naive implementation of kvm_unmap_hva_range
Changes since v8:
- Support cache maintenance on SMP through set/way
- Hyp mode idmaps are now section based and happen at kernel init
- Handle aborts in Hyp mode
- Inject undefined exceptions into the guest on error
- Kernel-side reset of all crucial registers
- Specifically state which target CPU is being virtualized
- Exit statistics in debugfs
- Some L2CTLR cp15 emulation cleanups
- Support spte_hva for MMU notifiers and take write faults
- FIX: Race condition in VMID generation
- BUG: Run exit handling code with disabled preemption
- Save/Restore abort fault register during world switch
Changes since v7:
- Traps accesses to ACTLR
- Do not trap WFE execution
- Upgrade barriers and TLB operations to inner-shareable domain
- Restructure hyp_pgd related code to be more opaque
- Random SMP fixes
- Random BUG fixes
- Improve commenting
- Support module loading/unloading of KVM/ARM
- Thumb-2 support for host kernel and KVM
- Unaligned cross-page wide guest Thumb instruction fetching
- Support ITSTATE fields in CPSR for Thumb guests
- Document HCR settings
Changes since v6:
- Support for MMU notifiers to not pin user pages in memory
- Support build with log debugging
- Bugfix: v6 clobbered r7 in init code
- Simplify hyp code mapping
- Cleanup of register access code
- Table-based CP15 emulation from Rusty Russell
- Various other bug fixes and cleanups
Changes since v5:
- General bugfixes and nit fixes from reviews
- Implemented re-use of VMIDs
- Cleaned up the Hyp-mapping code to be readable by non-mm hackers
(including myself)
- Integrated preliminary SMP support in base patches
- Lock-less interrupt injection and WFI support
- Fixed signal handling while in guest (increases overall stability)
Changes since v4:
- Addressed reviewer comments from v4
* cleanup debug and trace code
* remove printks
* fixup kvm_arch_vcpu_ioctl_run
* add trace details to mmio emulation
- Fix from Marc Zyngier: Move kvm_guest_enter/exit into non-preemptible
section (squashed into world-switch patch)
- Cleanup create_hyp_mappings/remove_hyp_mappings from Marc Zyngier
(squashed into hypervisor initialization patch)
- Removed the remove_hyp_mappings feature. Removing hypervisor mappings
could potentially unmap other important data shared in the same page.
- Removed the arm_ prefix from the arch-specific files.
- Initial SMP host/guest support
Changes since v3:
- v4 actually works, fully boots a guest
- Support compiling as a module
- Use static inlines instead of macros for vcpu_reg and friends
- Optimize kvm_vcpu_reg function
- Use Ftrace for trace capabilities
- Updated documentation and commenting
- Use KVM_IRQ_LINE instead of KVM_INTERRUPT
- Emulates load/store instructions not supported through HSR
syndrome information.
- Frees 2nd stage translation tables on VM teardown
- Handles IRQ/FIQ instructions
- Handles more CP15 accesses
- Support guest WFI calls
- Uses debugfs instead of /proc
- Support compiling in Thumb mode
Changes since v2:
- Implements world-switch code
- Maps guest memory using 2nd stage translation
- Emulates co-processor 15 instructions
- Forwards I/O faults to QEMU.
---
Marc Zyngier (10):
ARM: KVM: Keep track of currently running vcpus
ARM: KVM: Initial VGIC infrastructure support
ARM: KVM: Initial VGIC MMIO support code
ARM: KVM: VGIC distributor handling
ARM: KVM: VGIC virtual CPU interface management
ARM: KVM: VGIC interrupt injection
ARM: KVM: VGIC control interface world switch
ARM: KVM: VGIC initialisation code
ARM: KVM: vgic: reduce the number of vcpu kick
ARM: KVM: Add VGIC configuration option
arch/arm/include/asm/kvm_arm.h | 12
arch/arm/include/asm/kvm_host.h | 16 +
arch/arm/include/asm/kvm_vgic.h | 301 +++++++++++
arch/arm/kernel/asm-offsets.c | 12
arch/arm/kvm/Kconfig | 7
arch/arm/kvm/Makefile | 1
arch/arm/kvm/arm.c | 101 +++-
arch/arm/kvm/interrupts.S | 4
arch/arm/kvm/interrupts_head.S | 68 ++
arch/arm/kvm/mmu.c | 3
arch/arm/kvm/vgic.c | 1115 +++++++++++++++++++++++++++++++++++++++
virt/kvm/kvm_main.c | 5
12 files changed, 1640 insertions(+), 5 deletions(-)
create mode 100644 arch/arm/include/asm/kvm_vgic.h
create mode 100644 arch/arm/kvm/vgic.c
--
^ permalink raw reply [flat|nested] 8+ messages in thread
* [PATCH v2 01/10] ARM: KVM: Keep track of currently running vcpus
2012-10-01 9:07 [PATCH v2 00/10] KVM/ARM Implementation Christoffer Dall
@ 2012-10-01 9:07 ` Christoffer Dall
2012-10-01 9:07 ` [PATCH v2 02/10] ARM: KVM: Initial VGIC infrastructure support Christoffer Dall
` (4 subsequent siblings)
5 siblings, 0 replies; 8+ messages in thread
From: Christoffer Dall @ 2012-10-01 9:07 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
When an interrupt occurs for the guest, it is sometimes necessary
to find out which vcpu was running at that point.
Keep track of which vcpu is being run in kvm_arch_vcpu_ioctl_run(),
and allow the data to be retrieved using either:
- kvm_arm_get_running_vcpu(): returns the vcpu running at this point
on the current CPU. Can only be used in a non-preemptible context.
- kvm_arm_get_running_vcpus(): returns the per-CPU variable holding
the running vcpus, usable for per-CPU interrupts.
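As a sketch of how the two accessors are meant to be used, here is a
hypothetical userspace model (not the kernel code): the per-CPU machinery
is reduced to a plain array indexed by an explicit CPU id, and everything
except the two mirrored accessor names is illustrative.

```c
#include <stddef.h>

/* Userspace model of the per-CPU "running vcpu" bookkeeping:
 * kvm_arch_vcpu_ioctl_run() records the vcpu before entering the
 * guest and clears it on exit, so code running on the same CPU
 * (e.g. a per-CPU interrupt handler) can look it up. The explicit
 * `cpu` argument stands in for the implicit current CPU. */

#define NR_CPUS 4

struct kvm_vcpu { int id; };

static struct kvm_vcpu *running_vcpu[NR_CPUS]; /* zero-initialized: no vcpu */

/* Models kvm_arm_set_running_vcpu(): caller must stay on `cpu`. */
static void set_running_vcpu(int cpu, struct kvm_vcpu *vcpu)
{
	running_vcpu[cpu] = vcpu;
}

/* Models kvm_arm_get_running_vcpu(): NULL when no guest is running. */
static struct kvm_vcpu *get_running_vcpu(int cpu)
{
	return running_vcpu[cpu];
}
```

A per-CPU interrupt handler (such as the VGIC maintenance interrupt later
in this series) would call the getter to find the vcpu to act on, relying
on the NULL return to detect that no guest is running on that CPU.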
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
---
arch/arm/include/asm/kvm_host.h | 9 +++++++++
arch/arm/kvm/arm.c | 30 ++++++++++++++++++++++++++++++
2 files changed, 39 insertions(+)
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index e4b5352..69a8680 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -151,4 +151,13 @@ static inline int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
{
return 0;
}
+
+struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
+struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
+
+int kvm_arm_copy_coproc_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
+unsigned long kvm_arm_num_coproc_regs(struct kvm_vcpu *vcpu);
+struct kvm_one_reg;
+int kvm_arm_coproc_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *);
+int kvm_arm_coproc_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *);
#endif /* __ARM_KVM_HOST_H__ */
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 50e9585..8764dd0 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -53,11 +53,38 @@ static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
static struct vfp_hard_struct __percpu *kvm_host_vfp_state;
static unsigned long hyp_default_vectors;
+/* Per-CPU variable containing the currently running vcpu. */
+static DEFINE_PER_CPU(struct kvm_vcpu *, kvm_arm_running_vcpu);
+
/* The VMID used in the VTTBR */
static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1);
static u8 kvm_next_vmid;
static DEFINE_SPINLOCK(kvm_vmid_lock);
+static void kvm_arm_set_running_vcpu(struct kvm_vcpu *vcpu)
+{
+ BUG_ON(preemptible());
+ __get_cpu_var(kvm_arm_running_vcpu) = vcpu;
+}
+
+/**
+ * kvm_arm_get_running_vcpu - get the vcpu running on the current CPU.
+ * Must be called from non-preemptible context
+ */
+struct kvm_vcpu *kvm_arm_get_running_vcpu(void)
+{
+ BUG_ON(preemptible());
+ return __get_cpu_var(kvm_arm_running_vcpu);
+}
+
+/**
+ * kvm_arm_get_running_vcpus - get the per-CPU array of currently running vcpus.
+ */
+struct kvm_vcpu __percpu **kvm_get_running_vcpus(void)
+{
+ return &kvm_arm_running_vcpu;
+}
+
int kvm_arch_hardware_enable(void *garbage)
{
return 0;
@@ -296,10 +323,13 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
cpumask_clear_cpu(cpu, &vcpu->arch.require_dcache_flush);
flush_cache_all(); /* We'd really want v7_flush_dcache_all() */
}
+
+ kvm_arm_set_running_vcpu(vcpu);
}
void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
{
+ kvm_arm_set_running_vcpu(NULL);
}
int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
* [PATCH v2 02/10] ARM: KVM: Initial VGIC infrastructure support
2012-10-01 9:07 [PATCH v2 00/10] KVM/ARM Implementation Christoffer Dall
2012-10-01 9:07 ` [PATCH v2 01/10] ARM: KVM: Keep track of currently running vcpus Christoffer Dall
@ 2012-10-01 9:07 ` Christoffer Dall
2012-10-01 9:07 ` [PATCH v2 03/10] ARM: KVM: Initial VGIC MMIO support code Christoffer Dall
` (3 subsequent siblings)
5 siblings, 0 replies; 8+ messages in thread
From: Christoffer Dall @ 2012-10-01 9:07 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Wire the basic framework code for VGIC support. Nothing to enable
yet.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
---
arch/arm/include/asm/kvm_host.h | 7 ++++
arch/arm/include/asm/kvm_vgic.h | 65 +++++++++++++++++++++++++++++++++++++++
arch/arm/kvm/arm.c | 21 ++++++++++++-
arch/arm/kvm/interrupts.S | 4 ++
arch/arm/kvm/mmu.c | 3 ++
virt/kvm/kvm_main.c | 5 ++-
6 files changed, 102 insertions(+), 3 deletions(-)
create mode 100644 arch/arm/include/asm/kvm_vgic.h
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 69a8680..d65faea 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -22,6 +22,7 @@
#include <asm/kvm.h>
#include <asm/kvm_asm.h>
#include <asm/fpstate.h>
+#include <asm/kvm_vgic.h>
#define KVM_MAX_VCPUS NR_CPUS
#define KVM_MEMORY_SLOTS 32
@@ -52,6 +53,9 @@ struct kvm_arch {
/* VTTBR value associated with above pgd and vmid */
u64 vttbr;
+
+ /* Interrupt controller */
+ struct vgic_dist vgic;
};
#define KVM_NR_MEM_OBJS 40
@@ -87,6 +91,9 @@ struct kvm_vcpu_arch {
struct vfp_hard_struct vfp_guest;
struct vfp_hard_struct *vfp_host;
+ /* VGIC state */
+ struct vgic_cpu vgic_cpu;
+
/*
* Anything that is not used directly from assembly code goes
* here.
diff --git a/arch/arm/include/asm/kvm_vgic.h b/arch/arm/include/asm/kvm_vgic.h
new file mode 100644
index 0000000..e1fd530
--- /dev/null
+++ b/arch/arm/include/asm/kvm_vgic.h
@@ -0,0 +1,65 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __ASM_ARM_KVM_VGIC_H
+#define __ASM_ARM_KVM_VGIC_H
+
+struct vgic_dist {
+};
+
+struct vgic_cpu {
+};
+
+struct kvm;
+struct kvm_vcpu;
+struct kvm_run;
+struct kvm_exit_mmio;
+
+#ifndef CONFIG_KVM_ARM_VGIC
+static inline int kvm_vgic_hyp_init(void)
+{
+ return 0;
+}
+
+static inline int kvm_vgic_init(struct kvm *kvm)
+{
+ return 0;
+}
+
+static inline void kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu) {}
+static inline void kvm_vgic_sync_to_cpu(struct kvm_vcpu *vcpu) {}
+static inline void kvm_vgic_sync_from_cpu(struct kvm_vcpu *vcpu) {}
+
+static inline int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
+{
+ return 0;
+}
+
+static inline bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ struct kvm_exit_mmio *mmio)
+{
+ return false;
+}
+
+static inline int irqchip_in_kernel(struct kvm *kvm)
+{
+ return 0;
+}
+#endif
+
+#endif
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 8764dd0..cf13340 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -183,6 +183,9 @@ int kvm_dev_ioctl_check_extension(long ext)
{
int r;
switch (ext) {
+#ifdef CONFIG_KVM_ARM_VGIC
+ case KVM_CAP_IRQCHIP:
+#endif
case KVM_CAP_USER_MEMORY:
case KVM_CAP_DESTROY_MEMORY_REGION_WORKS:
case KVM_CAP_ONE_REG:
@@ -301,6 +304,10 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
{
/* Force users to call KVM_ARM_VCPU_INIT */
vcpu->arch.target = -1;
+
+ /* Set up VGIC */
+ kvm_vgic_vcpu_init(vcpu);
+
return 0;
}
@@ -360,7 +367,7 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
*/
int kvm_arch_vcpu_runnable(struct kvm_vcpu *v)
{
- return !!v->arch.irq_lines;
+ return !!v->arch.irq_lines || kvm_vgic_vcpu_pending_irq(v);
}
int kvm_arch_vcpu_in_guest_mode(struct kvm_vcpu *v)
@@ -629,6 +636,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
update_vttbr(vcpu->kvm);
+ kvm_vgic_sync_to_cpu(vcpu);
+
local_irq_disable();
/*
@@ -641,6 +650,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
if (ret <= 0 || need_new_vmid_gen(vcpu->kvm)) {
local_irq_enable();
+ kvm_vgic_sync_from_cpu(vcpu);
continue;
}
@@ -679,6 +689,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
* Back from guest
*************************************************************/
+ kvm_vgic_sync_from_cpu(vcpu);
+
ret = handle_exit(vcpu, run, ret);
}
@@ -942,6 +954,13 @@ static int init_hyp_mode(void)
}
}
+ /*
+ * Init HYP view of VGIC
+ */
+ err = kvm_vgic_hyp_init();
+ if (err)
+ goto out_free_mappings;
+
return 0;
out_free_vfp:
free_percpu(kvm_host_vfp_state);
diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
index 90347d2..914c7f2 100644
--- a/arch/arm/kvm/interrupts.S
+++ b/arch/arm/kvm/interrupts.S
@@ -104,6 +104,8 @@ ENTRY(__kvm_vcpu_run)
store_mode_state sp, irq
store_mode_state sp, fiq
+ restore_vgic_state r0
+
@ Store hardware CP15 state and load guest state
read_cp15_state
write_cp15_state 1, r0
@@ -221,6 +223,8 @@ after_vfp_restore:
read_cp15_state 1, r1
write_cp15_state
+ save_vgic_state r1
+
load_mode_state sp, fiq
load_mode_state sp, irq
load_mode_state sp, und
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 0ab6ea3..5394a52 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -830,6 +830,9 @@ static int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
if (mmio.is_write)
memcpy(mmio.data, vcpu_reg(vcpu, rd), mmio.len);
+ if (vgic_handle_mmio(vcpu, run, &mmio))
+ return 1;
+
kvm_prepare_mmio(run, &mmio);
return 0;
}
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c353b45..e7b0c68 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1883,12 +1883,13 @@ static long kvm_vcpu_ioctl(struct file *filp,
if (vcpu->kvm->mm != current->mm)
return -EIO;
-#if defined(CONFIG_S390) || defined(CONFIG_PPC)
+#if defined(CONFIG_S390) || defined(CONFIG_PPC) || defined(CONFIG_ARM)
/*
* Special cases: vcpu ioctls that are asynchronous to vcpu execution,
* so vcpu_load() would break it.
*/
- if (ioctl == KVM_S390_INTERRUPT || ioctl == KVM_INTERRUPT)
+ if (ioctl == KVM_S390_INTERRUPT || ioctl == KVM_INTERRUPT ||
+ ioctl == KVM_IRQ_LINE)
return kvm_arch_vcpu_ioctl(filp, ioctl, arg);
#endif
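The static inline stubs in kvm_vgic.h follow a standard compile-out
pattern: when CONFIG_KVM_ARM_VGIC is off, every VGIC entry point collapses
to a no-op with a safe return value, so the callers in arm.c need no
#ifdefs of their own. A minimal standalone illustration of the idiom
(irqchip_in_kernel is from the patch; the surrounding program is
hypothetical):

```c
#include <stddef.h>

/* Compile-out idiom from kvm_vgic.h: with the config symbol unset,
 * the stub below is used and callers see "no in-kernel irqchip",
 * "nothing pending", and so on, without any conditional compilation
 * at the call sites. */

struct kvm { int dummy; };

#ifdef CONFIG_KVM_ARM_VGIC
int irqchip_in_kernel(struct kvm *kvm);	/* real version elsewhere */
#else
static inline int irqchip_in_kernel(struct kvm *kvm)
{
	(void)kvm;
	return 0;	/* VGIC compiled out: no in-kernel irqchip */
}
#endif
```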
* [PATCH v2 03/10] ARM: KVM: Initial VGIC MMIO support code
2012-10-01 9:07 [PATCH v2 00/10] KVM/ARM Implementation Christoffer Dall
2012-10-01 9:07 ` [PATCH v2 01/10] ARM: KVM: Keep track of currently running vcpus Christoffer Dall
2012-10-01 9:07 ` [PATCH v2 02/10] ARM: KVM: Initial VGIC infrastructure support Christoffer Dall
@ 2012-10-01 9:07 ` Christoffer Dall
2012-10-01 9:08 ` [PATCH v2 04/10] ARM: KVM: VGIC distributor handling Christoffer Dall
` (2 subsequent siblings)
5 siblings, 0 replies; 8+ messages in thread
From: Christoffer Dall @ 2012-10-01 9:07 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Wire the initial in-kernel MMIO support code for the VGIC, used
for the distributor emulation.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
---
arch/arm/include/asm/kvm_vgic.h | 6 +-
arch/arm/kvm/Makefile | 1
arch/arm/kvm/vgic.c | 138 +++++++++++++++++++++++++++++++++++++++
3 files changed, 144 insertions(+), 1 deletion(-)
create mode 100644 arch/arm/kvm/vgic.c
diff --git a/arch/arm/include/asm/kvm_vgic.h b/arch/arm/include/asm/kvm_vgic.h
index e1fd530..a87ec6c 100644
--- a/arch/arm/include/asm/kvm_vgic.h
+++ b/arch/arm/include/asm/kvm_vgic.h
@@ -30,7 +30,11 @@ struct kvm_vcpu;
struct kvm_run;
struct kvm_exit_mmio;
-#ifndef CONFIG_KVM_ARM_VGIC
+#ifdef CONFIG_KVM_ARM_VGIC
+bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ struct kvm_exit_mmio *mmio);
+
+#else
static inline int kvm_vgic_hyp_init(void)
{
return 0;
diff --git a/arch/arm/kvm/Makefile b/arch/arm/kvm/Makefile
index ea5b282..89608c0 100644
--- a/arch/arm/kvm/Makefile
+++ b/arch/arm/kvm/Makefile
@@ -20,3 +20,4 @@ obj-$(CONFIG_KVM_ARM_HOST) += $(addprefix ../../../virt/kvm/, kvm_main.o coalesc
obj-$(CONFIG_KVM_ARM_HOST) += arm.o guest.o mmu.o emulate.o reset.o
obj-$(CONFIG_KVM_ARM_HOST) += coproc.o coproc_a15.o
+obj-$(CONFIG_KVM_ARM_VGIC) += vgic.o
diff --git a/arch/arm/kvm/vgic.c b/arch/arm/kvm/vgic.c
new file mode 100644
index 0000000..26ada3b
--- /dev/null
+++ b/arch/arm/kvm/vgic.c
@@ -0,0 +1,138 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/kvm.h>
+#include <linux/kvm_host.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <asm/kvm_emulate.h>
+
+#define ACCESS_READ_VALUE (1 << 0)
+#define ACCESS_READ_RAZ (0 << 0)
+#define ACCESS_READ_MASK(x) ((x) & (1 << 0))
+#define ACCESS_WRITE_IGNORED (0 << 1)
+#define ACCESS_WRITE_SETBIT (1 << 1)
+#define ACCESS_WRITE_CLEARBIT (2 << 1)
+#define ACCESS_WRITE_VALUE (3 << 1)
+#define ACCESS_WRITE_MASK(x) ((x) & (3 << 1))
+
+/**
+ * vgic_reg_access - access vgic register
+ * @mmio: pointer to the data describing the mmio access
+ * @reg: pointer to the virtual backing of the vgic distributor struct
+ * @offset: least significant 2 bits used for word offset
+ * @mode: ACCESS_ mode (see defines above)
+ *
+ * Helper to make vgic register access easier using one of the access
+ * modes defined for vgic register access
+ * (read,raz,write-ignored,setbit,clearbit,write)
+ */
+static void vgic_reg_access(struct kvm_exit_mmio *mmio, u32 *reg,
+ u32 offset, int mode)
+{
+ int word_offset = offset & 3;
+ int shift = word_offset * 8;
+ u32 mask;
+ u32 regval;
+
+ /*
+ * Any alignment fault should have been delivered to the guest
+ * directly (ARM ARM B3.12.7 "Prioritization of aborts").
+ */
+
+ mask = (~0U) >> (word_offset * 8);
+ if (reg)
+ regval = *reg;
+ else {
+ BUG_ON(mode != (ACCESS_READ_RAZ | ACCESS_WRITE_IGNORED));
+ regval = 0;
+ }
+
+ if (mmio->is_write) {
+ u32 data = (*((u32 *)mmio->data) & mask) << shift;
+ switch (ACCESS_WRITE_MASK(mode)) {
+ case ACCESS_WRITE_IGNORED:
+ return;
+
+ case ACCESS_WRITE_SETBIT:
+ regval |= data;
+ break;
+
+ case ACCESS_WRITE_CLEARBIT:
+ regval &= ~data;
+ break;
+
+ case ACCESS_WRITE_VALUE:
+ regval = (regval & ~(mask << shift)) | data;
+ break;
+ }
+ *reg = regval;
+ } else {
+ switch (ACCESS_READ_MASK(mode)) {
+ case ACCESS_READ_RAZ:
+ regval = 0;
+ /* fall through */
+
+ case ACCESS_READ_VALUE:
+ *((u32 *)mmio->data) = (regval >> shift) & mask;
+ }
+ }
+}
+
+/* All this should be handled by kvm_bus_io_*()... FIXME!!! */
+struct mmio_range {
+ unsigned long base;
+ unsigned long len;
+ bool (*handle_mmio)(struct kvm_vcpu *vcpu, struct kvm_exit_mmio *mmio,
+ u32 offset);
+};
+
+static const struct mmio_range vgic_ranges[] = {
+ {}
+};
+
+static const
+struct mmio_range *find_matching_range(const struct mmio_range *ranges,
+ struct kvm_exit_mmio *mmio,
+ unsigned long base)
+{
+ const struct mmio_range *r = ranges;
+ unsigned long addr = mmio->phys_addr - base;
+
+ while (r->len) {
+ if (addr >= r->base &&
+ (addr + mmio->len) <= (r->base + r->len))
+ return r;
+ r++;
+ }
+
+ return NULL;
+}
+
+/**
+ * vgic_handle_mmio - handle an in-kernel MMIO access
+ * @vcpu: pointer to the vcpu performing the access
+ * @mmio: pointer to the data describing the access
+ *
+ * returns true if the MMIO access has been performed in kernel space,
+ * and false if it needs to be emulated in user space.
+ */
+bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run, struct kvm_exit_mmio *mmio)
+{
+ return false;
+}
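The ACCESS_* encoding above drives all of the distributor register
emulation in the following patches. Here is a self-contained model of the
write/read mode semantics, using the same bit encoding as the patch; the
helper is a simplification that ignores sub-word offsets and is
illustrative only.

```c
#include <stdbool.h>
#include <stdint.h>

/* Same encoding as vgic.c: bit 0 selects the read behaviour,
 * bits 2:1 select the write behaviour. */
#define ACCESS_READ_VALUE      (1 << 0)
#define ACCESS_READ_RAZ        (0 << 0)
#define ACCESS_WRITE_IGNORED   (0 << 1)
#define ACCESS_WRITE_SETBIT    (1 << 1)
#define ACCESS_WRITE_CLEARBIT  (2 << 1)
#define ACCESS_WRITE_VALUE     (3 << 1)

/* Whole-word model of vgic_reg_access(): returns the register value
 * after a write, or the value the guest would read. */
static uint32_t reg_access(uint32_t reg, uint32_t data, bool is_write, int mode)
{
	if (is_write) {
		switch (mode & (3 << 1)) {
		case ACCESS_WRITE_IGNORED:  return reg;         /* WI  */
		case ACCESS_WRITE_SETBIT:   return reg | data;  /* e.g. ISENABLER */
		case ACCESS_WRITE_CLEARBIT: return reg & ~data; /* e.g. ICENABLER */
		case ACCESS_WRITE_VALUE:    return data;        /* plain write */
		}
		return reg;
	}
	/* Read: RAZ yields 0, VALUE yields the register. */
	return (mode & ACCESS_READ_VALUE) ? reg : 0;
}
```

The set/clear pairs are what let a single backing word serve the GIC's
paired enable/disable style registers without read-modify-write races in
the emulation code.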
* [PATCH v2 04/10] ARM: KVM: VGIC distributor handling
2012-10-01 9:07 [PATCH v2 00/10] KVM/ARM Implementation Christoffer Dall
` (2 preceding siblings ...)
2012-10-01 9:07 ` [PATCH v2 03/10] ARM: KVM: Initial VGIC MMIO support code Christoffer Dall
@ 2012-10-01 9:08 ` Christoffer Dall
2012-10-01 9:08 ` [PATCH v2 05/10] ARM: KVM: VGIC virtual CPU interface management Christoffer Dall
2012-10-01 9:09 ` [PATCH v2 00/10] KVM/ARM Implementation Christoffer Dall
5 siblings, 0 replies; 8+ messages in thread
From: Christoffer Dall @ 2012-10-01 9:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Add the GIC distributor emulation code. A number of the GIC features
are simply ignored as they are not required to boot a Linux guest.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
---
arch/arm/include/asm/kvm_vgic.h | 170 ++++++++++++++
arch/arm/kvm/vgic.c | 475 +++++++++++++++++++++++++++++++++++++++
2 files changed, 644 insertions(+), 1 deletion(-)
diff --git a/arch/arm/include/asm/kvm_vgic.h b/arch/arm/include/asm/kvm_vgic.h
index a87ec6c..a82699f 100644
--- a/arch/arm/include/asm/kvm_vgic.h
+++ b/arch/arm/include/asm/kvm_vgic.h
@@ -19,7 +19,177 @@
#ifndef __ASM_ARM_KVM_VGIC_H
#define __ASM_ARM_KVM_VGIC_H
+#include <linux/kernel.h>
+#include <linux/kvm.h>
+#include <linux/kvm_host.h>
+#include <linux/irqreturn.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+
+#define VGIC_NR_IRQS 128
+#define VGIC_NR_SHARED_IRQS (VGIC_NR_IRQS - 32)
+#define VGIC_MAX_CPUS NR_CPUS
+
+/* Sanity checks... */
+#if (VGIC_MAX_CPUS > 8)
+#error Invalid number of CPU interfaces
+#endif
+
+#if (VGIC_NR_IRQS & 31)
+#error "VGIC_NR_IRQS must be a multiple of 32"
+#endif
+
+#if (VGIC_NR_IRQS > 1024)
+#error "VGIC_NR_IRQS must be <= 1024"
+#endif
+
+/*
+ * The GIC distributor registers describing interrupts have two parts:
+ * - 32 per-CPU interrupts (SGI + PPI)
+ * - a bunch of shared interrupts (SPI)
+ */
+struct vgic_bitmap {
+ union {
+ u32 reg[1];
+ unsigned long reg_ul[0];
+ } percpu[VGIC_MAX_CPUS];
+ union {
+ u32 reg[VGIC_NR_SHARED_IRQS / 32];
+ unsigned long reg_ul[0];
+ } shared;
+};
+
+static inline u32 *vgic_bitmap_get_reg(struct vgic_bitmap *x,
+ int cpuid, u32 offset)
+{
+ offset >>= 2;
+ BUG_ON(offset > (VGIC_NR_IRQS / 32));
+ if (!offset)
+ return x->percpu[cpuid].reg;
+ else
+ return x->shared.reg + offset - 1;
+}
+
+static inline int vgic_bitmap_get_irq_val(struct vgic_bitmap *x,
+ int cpuid, int irq)
+{
+ if (irq < 32)
+ return test_bit(irq, x->percpu[cpuid].reg_ul);
+
+ return test_bit(irq - 32, x->shared.reg_ul);
+}
+
+static inline void vgic_bitmap_set_irq_val(struct vgic_bitmap *x,
+ int cpuid, int irq, int val)
+{
+ unsigned long *reg;
+
+ if (irq < 32)
+ reg = x->percpu[cpuid].reg_ul;
+ else {
+ reg = x->shared.reg_ul;
+ irq -= 32;
+ }
+
+ if (val)
+ set_bit(irq, reg);
+ else
+ clear_bit(irq, reg);
+}
+
+static inline unsigned long *vgic_bitmap_get_cpu_map(struct vgic_bitmap *x,
+ int cpuid)
+{
+ if (unlikely(cpuid >= VGIC_MAX_CPUS))
+ return NULL;
+ return x->percpu[cpuid].reg_ul;
+}
+
+static inline unsigned long *vgic_bitmap_get_shared_map(struct vgic_bitmap *x)
+{
+ return x->shared.reg_ul;
+}
+
+struct vgic_bytemap {
+ union {
+ u32 reg[8];
+ unsigned long reg_ul[0];
+ } percpu[VGIC_MAX_CPUS];
+ union {
+ u32 reg[VGIC_NR_SHARED_IRQS / 4];
+ unsigned long reg_ul[0];
+ } shared;
+};
+
+static inline u32 *vgic_bytemap_get_reg(struct vgic_bytemap *x,
+ int cpuid, u32 offset)
+{
+ offset >>= 2;
+ BUG_ON(offset > (VGIC_NR_IRQS / 4));
+ if (offset < 4)
+ return x->percpu[cpuid].reg + offset;
+ else
+ return x->shared.reg + offset - 8;
+}
+
+static inline int vgic_bytemap_get_irq_val(struct vgic_bytemap *x,
+ int cpuid, int irq)
+{
+ u32 *reg, shift;
+ shift = (irq & 3) * 8;
+ reg = vgic_bytemap_get_reg(x, cpuid, irq);
+ return (*reg >> shift) & 0xff;
+}
+
+static inline void vgic_bytemap_set_irq_val(struct vgic_bytemap *x,
+ int cpuid, int irq, int val)
+{
+ u32 *reg, shift;
+ shift = (irq & 3) * 8;
+ reg = vgic_bytemap_get_reg(x, cpuid, irq);
+ *reg &= ~(0xff << shift);
+ *reg |= (val & 0xff) << shift;
+}
+
struct vgic_dist {
+#ifdef CONFIG_KVM_ARM_VGIC
+ spinlock_t lock;
+
+ /* Virtual control interface mapping */
+ void __iomem *vctrl_base;
+
+ /* Distributor mapping in the guest */
+ unsigned long vgic_dist_base;
+ unsigned long vgic_dist_size;
+
+ /* Distributor enabled */
+ u32 enabled;
+
+ /* Interrupt enabled (one bit per IRQ) */
+ struct vgic_bitmap irq_enabled;
+
+ /* Interrupt 'pin' level */
+ struct vgic_bitmap irq_state;
+
+ /* Level-triggered interrupt in progress */
+ struct vgic_bitmap irq_active;
+
+ /* Interrupt priority. Not used yet. */
+ struct vgic_bytemap irq_priority;
+
+ /* Level/edge triggered */
+ struct vgic_bitmap irq_cfg;
+
+ /* Source CPU per SGI and target CPU */
+ u8 irq_sgi_sources[VGIC_MAX_CPUS][16];
+
+ /* Target CPU for each IRQ */
+ u8 irq_spi_cpu[VGIC_NR_SHARED_IRQS];
+ struct vgic_bitmap irq_spi_target[VGIC_MAX_CPUS];
+
+ /* Bitmap indicating which CPU has something pending */
+ unsigned long irq_pending_on_cpu;
+#endif
};
struct vgic_cpu {
diff --git a/arch/arm/kvm/vgic.c b/arch/arm/kvm/vgic.c
index 26ada3b..a870596 100644
--- a/arch/arm/kvm/vgic.c
+++ b/arch/arm/kvm/vgic.c
@@ -22,6 +22,46 @@
#include <linux/io.h>
#include <asm/kvm_emulate.h>
+/*
+ * How the whole thing works (courtesy of Christoffer Dall):
+ *
+ * - At any time, the dist->irq_pending_on_cpu is the oracle that knows if
+ * something is pending
+ * - VGIC pending interrupts are stored on the vgic.irq_state vgic
+ * bitmap (this bitmap is updated by both user land ioctls and guest
+ * mmio ops) and indicate the 'wire' state.
+ * - Every time the bitmap changes, the irq_pending_on_cpu oracle is
+ * recalculated
+ * - To calculate the oracle, we need info for each cpu from
+ * compute_pending_for_cpu, which considers:
+ * - PPI: dist->irq_state & dist->irq_enabled
+ * - SPI: dist->irq_state & dist->irq_enabled & dist->irq_spi_target
+ * - irq_spi_target is a 'formatted' version of the GICD_ITARGETSRn
+ * registers, stored on each vcpu. We only keep one bit of
+ * information per interrupt, making sure that only one vcpu can
+ * accept the interrupt.
+ * - The same is true when injecting an interrupt, except that we only
+ * consider a single interrupt at a time. The irq_spi_cpu array
+ * contains the target CPU for each SPI.
+ *
+ * The handling of level interrupts adds some extra complexity. We
+ * need to track when the interrupt has been EOIed, so we can sample
+ * the 'line' again. This is achieved as such:
+ *
+ * - When a level interrupt is moved onto a vcpu, the corresponding
+ * bit in irq_active is set. As long as this bit is set, the line
+ * will be ignored for further interrupts. The interrupt is injected
+ * into the vcpu with the VGIC_LR_EOI bit set (generate a
+ * maintenance interrupt on EOI).
+ * - When the interrupt is EOIed, the maintenance interrupt fires,
+ * and clears the corresponding bit in irq_active. This allows the
+ * interrupt line to be sampled again.
+ */
+
+/* Temporary hacks, need to be provided by userspace emulation */
+#define VGIC_DIST_BASE 0x2c001000
+#define VGIC_DIST_SIZE 0x1000
+
#define ACCESS_READ_VALUE (1 << 0)
#define ACCESS_READ_RAZ (0 << 0)
#define ACCESS_READ_MASK(x) ((x) & (1 << 0))
@@ -31,6 +71,14 @@
#define ACCESS_WRITE_VALUE (3 << 1)
#define ACCESS_WRITE_MASK(x) ((x) & (3 << 1))
+static void vgic_update_state(struct kvm *kvm);
+static void vgic_dispatch_sgi(struct kvm_vcpu *vcpu, u32 reg);
+
+static inline int vgic_irq_is_edge(struct vgic_dist *dist, int irq)
+{
+ return vgic_bitmap_get_irq_val(&dist->irq_cfg, 0, irq);
+}
+
/**
* vgic_reg_access - access vgic register
* @mmio: pointer to the data describing the mmio access
@@ -94,6 +142,280 @@ static void vgic_reg_access(struct kvm_exit_mmio *mmio, u32 *reg,
}
}
+static bool handle_mmio_misc(struct kvm_vcpu *vcpu,
+ struct kvm_exit_mmio *mmio, u32 offset)
+{
+ u32 reg;
+ u32 u32off = offset & 3;
+
+ switch (offset & ~3) {
+ case 0: /* CTLR */
+ reg = vcpu->kvm->arch.vgic.enabled;
+ vgic_reg_access(mmio, &reg, u32off,
+ ACCESS_READ_VALUE | ACCESS_WRITE_VALUE);
+ if (mmio->is_write) {
+ vcpu->kvm->arch.vgic.enabled = reg & 1;
+ vgic_update_state(vcpu->kvm);
+ return true;
+ }
+ break;
+
+ case 4: /* TYPER */
+ reg = (atomic_read(&vcpu->kvm->online_vcpus) - 1) << 5;
+ reg |= (VGIC_NR_IRQS >> 5) - 1;
+ vgic_reg_access(mmio, &reg, u32off,
+ ACCESS_READ_VALUE | ACCESS_WRITE_IGNORED);
+ break;
+
+ case 8: /* IIDR */
+ reg = 0x4B00043B;
+ vgic_reg_access(mmio, &reg, u32off,
+ ACCESS_READ_VALUE | ACCESS_WRITE_IGNORED);
+ break;
+ }
+
+ return false;
+}
+
+static bool handle_mmio_raz_wi(struct kvm_vcpu *vcpu,
+ struct kvm_exit_mmio *mmio, u32 offset)
+{
+ vgic_reg_access(mmio, NULL, offset,
+ ACCESS_READ_RAZ | ACCESS_WRITE_IGNORED);
+ return false;
+}
+
+static bool handle_mmio_set_enable_reg(struct kvm_vcpu *vcpu,
+ struct kvm_exit_mmio *mmio, u32 offset)
+{
+ u32 *reg = vgic_bitmap_get_reg(&vcpu->kvm->arch.vgic.irq_enabled,
+ vcpu->vcpu_id, offset);
+ vgic_reg_access(mmio, reg, offset,
+ ACCESS_READ_VALUE | ACCESS_WRITE_SETBIT);
+ if (mmio->is_write) {
+ vgic_update_state(vcpu->kvm);
+ return true;
+ }
+
+ return false;
+}
+
+static bool handle_mmio_clear_enable_reg(struct kvm_vcpu *vcpu,
+ struct kvm_exit_mmio *mmio, u32 offset)
+{
+ u32 *reg = vgic_bitmap_get_reg(&vcpu->kvm->arch.vgic.irq_enabled,
+ vcpu->vcpu_id, offset);
+ vgic_reg_access(mmio, reg, offset,
+ ACCESS_READ_VALUE | ACCESS_WRITE_CLEARBIT);
+ if (mmio->is_write) {
+ if (offset < 4) /* Force SGI enabled */
+ *reg |= 0xffff;
+ vgic_update_state(vcpu->kvm);
+ return true;
+ }
+
+ return false;
+}
+
+static bool handle_mmio_set_pending_reg(struct kvm_vcpu *vcpu,
+ struct kvm_exit_mmio *mmio, u32 offset)
+{
+ u32 *reg = vgic_bitmap_get_reg(&vcpu->kvm->arch.vgic.irq_state,
+ vcpu->vcpu_id, offset);
+ vgic_reg_access(mmio, reg, offset,
+ ACCESS_READ_VALUE | ACCESS_WRITE_SETBIT);
+ if (mmio->is_write) {
+ vgic_update_state(vcpu->kvm);
+ return true;
+ }
+
+ return false;
+}
+
+static bool handle_mmio_clear_pending_reg(struct kvm_vcpu *vcpu,
+ struct kvm_exit_mmio *mmio, u32 offset)
+{
+ u32 *reg = vgic_bitmap_get_reg(&vcpu->kvm->arch.vgic.irq_state,
+ vcpu->vcpu_id, offset);
+ vgic_reg_access(mmio, reg, offset,
+ ACCESS_READ_VALUE | ACCESS_WRITE_CLEARBIT);
+ if (mmio->is_write) {
+ vgic_update_state(vcpu->kvm);
+ return true;
+ }
+
+ return false;
+}
+
+static bool handle_mmio_priority_reg(struct kvm_vcpu *vcpu,
+ struct kvm_exit_mmio *mmio, u32 offset)
+{
+ u32 *reg = vgic_bytemap_get_reg(&vcpu->kvm->arch.vgic.irq_priority,
+ vcpu->vcpu_id, offset);
+ vgic_reg_access(mmio, reg, offset,
+ ACCESS_READ_VALUE | ACCESS_WRITE_VALUE);
+ return false;
+}
+
+static u32 vgic_get_target_reg(struct kvm *kvm, int irq)
+{
+ struct vgic_dist *dist = &kvm->arch.vgic;
+ struct kvm_vcpu *vcpu;
+ int i, c;
+ unsigned long *bmap;
+ u32 val = 0;
+
+ BUG_ON(irq & 3);
+ BUG_ON(irq < 32);
+
+ irq -= 32;
+
+ kvm_for_each_vcpu(c, vcpu, kvm) {
+ bmap = vgic_bitmap_get_shared_map(&dist->irq_spi_target[c]);
+ for (i = 0; i < 4; i++)
+ if (test_bit(irq + i, bmap))
+ val |= 1 << (c + i * 8);
+ }
+
+ return val;
+}
+
+static void vgic_set_target_reg(struct kvm *kvm, u32 val, int irq)
+{
+ struct vgic_dist *dist = &kvm->arch.vgic;
+ struct kvm_vcpu *vcpu;
+ int i, c;
+ unsigned long *bmap;
+ u32 target;
+
+ BUG_ON(irq & 3);
+ BUG_ON(irq < 32);
+
+ irq -= 32;
+
+ /*
+ * Pick the LSB in each byte. This ensures we target exactly
+ * one vcpu per IRQ. If the byte is null, assume we target
+ * CPU0.
+ */
+ for (i = 0; i < 4; i++) {
+ int shift = i * 8;
+ target = ffs((val >> shift) & 0xffU);
+ target = target ? (target - 1) : 0;
+ dist->irq_spi_cpu[irq + i] = target;
+ kvm_for_each_vcpu(c, vcpu, kvm) {
+ bmap = vgic_bitmap_get_shared_map(&dist->irq_spi_target[c]);
+ if (c == target)
+ set_bit(irq + i, bmap);
+ else
+ clear_bit(irq + i, bmap);
+ }
+ }
+}
+
+static bool handle_mmio_target_reg(struct kvm_vcpu *vcpu,
+ struct kvm_exit_mmio *mmio, u32 offset)
+{
+ u32 reg;
+
+ /* We treat the banked interrupts targets as read-only */
+ if (offset < 32) {
+ u32 roreg = 1 << vcpu->vcpu_id;
+ roreg |= roreg << 8;
+ roreg |= roreg << 16;
+
+ vgic_reg_access(mmio, &roreg, offset,
+ ACCESS_READ_VALUE | ACCESS_WRITE_IGNORED);
+ return false;
+ }
+
+ reg = vgic_get_target_reg(vcpu->kvm, offset & ~3U);
+ vgic_reg_access(mmio, &reg, offset,
+ ACCESS_READ_VALUE | ACCESS_WRITE_VALUE);
+ if (mmio->is_write) {
+ vgic_set_target_reg(vcpu->kvm, reg, offset & ~3U);
+ vgic_update_state(vcpu->kvm);
+ return true;
+ }
+
+ return false;
+}
+
+static u32 vgic_cfg_expand(u16 val)
+{
+ u32 res = 0;
+ int i;
+
+ for (i = 0; i < 16; i++)
+ res |= ((val >> i) & 1) << (2 * i + 1);
+
+ return res;
+}
+
+static u16 vgic_cfg_compress(u32 val)
+{
+ u16 res = 0;
+ int i;
+
+ for (i = 0; i < 16; i++)
+ res |= ((val >> (i * 2 + 1)) & 1) << i;
+
+ return res;
+}
+
+/*
+ * The distributor uses 2 bits per IRQ for the CFG register, but the
+ * LSB is always 0. As such, we only keep the upper bit, and use the
+ * two above functions to compress/expand the bits
+ */
+static bool handle_mmio_cfg_reg(struct kvm_vcpu *vcpu,
+ struct kvm_exit_mmio *mmio, u32 offset)
+{
+ u32 val;
+ u32 *reg = vgic_bitmap_get_reg(&vcpu->kvm->arch.vgic.irq_cfg,
+ vcpu->vcpu_id, offset >> 1);
+ if (offset & 2)
+ val = *reg >> 16;
+ else
+ val = *reg & 0xffff;
+
+ val = vgic_cfg_expand(val);
+ vgic_reg_access(mmio, &val, offset,
+ ACCESS_READ_VALUE | ACCESS_WRITE_VALUE);
+ if (mmio->is_write) {
+ if (offset < 4) {
+ *reg = ~0U; /* Force PPIs/SGIs to 1 */
+ return false;
+ }
+
+ val = vgic_cfg_compress(val);
+ if (offset & 2) {
+ *reg &= 0xffff;
+ *reg |= val << 16;
+ } else {
+ *reg &= 0xffff << 16;
+ *reg |= val;
+ }
+ }
+
+ return false;
+}
+
+static bool handle_mmio_sgi_reg(struct kvm_vcpu *vcpu,
+ struct kvm_exit_mmio *mmio, u32 offset)
+{
+ u32 reg;
+ vgic_reg_access(mmio, &reg, offset,
+ ACCESS_READ_RAZ | ACCESS_WRITE_VALUE);
+ if (mmio->is_write) {
+ vgic_dispatch_sgi(vcpu, reg);
+ vgic_update_state(vcpu->kvm);
+ return true;
+ }
+
+ return false;
+}
+
/* All this should be handled by kvm_bus_io_*()... FIXME!!! */
struct mmio_range {
unsigned long base;
@@ -103,6 +425,66 @@ struct mmio_range {
};
static const struct mmio_range vgic_ranges[] = {
+ { /* CTRL, TYPER, IIDR */
+ .base = 0,
+ .len = 12,
+ .handle_mmio = handle_mmio_misc,
+ },
+ { /* IGROUPRn */
+ .base = 0x80,
+ .len = VGIC_NR_IRQS / 8,
+ .handle_mmio = handle_mmio_raz_wi,
+ },
+ { /* ISENABLERn */
+ .base = 0x100,
+ .len = VGIC_NR_IRQS / 8,
+ .handle_mmio = handle_mmio_set_enable_reg,
+ },
+ { /* ICENABLERn */
+ .base = 0x180,
+ .len = VGIC_NR_IRQS / 8,
+ .handle_mmio = handle_mmio_clear_enable_reg,
+ },
+ { /* ISPENDRn */
+ .base = 0x200,
+ .len = VGIC_NR_IRQS / 8,
+ .handle_mmio = handle_mmio_set_pending_reg,
+ },
+ { /* ICPENDRn */
+ .base = 0x280,
+ .len = VGIC_NR_IRQS / 8,
+ .handle_mmio = handle_mmio_clear_pending_reg,
+ },
+ { /* ISACTIVERn */
+ .base = 0x300,
+ .len = VGIC_NR_IRQS / 8,
+ .handle_mmio = handle_mmio_raz_wi,
+ },
+ { /* ICACTIVERn */
+ .base = 0x380,
+ .len = VGIC_NR_IRQS / 8,
+ .handle_mmio = handle_mmio_raz_wi,
+ },
+ { /* IPRIORITYRn */
+ .base = 0x400,
+ .len = VGIC_NR_IRQS,
+ .handle_mmio = handle_mmio_priority_reg,
+ },
+ { /* ITARGETSRn */
+ .base = 0x800,
+ .len = VGIC_NR_IRQS,
+ .handle_mmio = handle_mmio_target_reg,
+ },
+ { /* ICFGRn */
+ .base = 0xC00,
+ .len = VGIC_NR_IRQS / 4,
+ .handle_mmio = handle_mmio_cfg_reg,
+ },
+ { /* SGIRn */
+ .base = 0xF00,
+ .len = 4,
+ .handle_mmio = handle_mmio_sgi_reg,
+ },
{}
};
@@ -134,5 +516,96 @@ struct mmio_range *find_matching_range(const struct mmio_range *ranges,
*/
bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run, struct kvm_exit_mmio *mmio)
{
- return KVM_EXIT_MMIO;
+ const struct mmio_range *range;
+ struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+ unsigned long base = dist->vgic_dist_base;
+ bool updated_state;
+
+ if (!irqchip_in_kernel(vcpu->kvm) ||
+ mmio->phys_addr < base ||
+ (mmio->phys_addr + mmio->len) > (base + dist->vgic_dist_size))
+ return false;
+
+ range = find_matching_range(vgic_ranges, mmio, base);
+ if (unlikely(!range || !range->handle_mmio)) {
+ pr_warn("Unhandled access %d %08llx %d\n",
+ mmio->is_write, mmio->phys_addr, mmio->len);
+ return false;
+ }
+
+ spin_lock(&vcpu->kvm->arch.vgic.lock);
+ updated_state = range->handle_mmio(vcpu, mmio,
+ mmio->phys_addr - range->base - base);
+ spin_unlock(&vcpu->kvm->arch.vgic.lock);
+ kvm_prepare_mmio(run, mmio);
+ kvm_handle_mmio_return(vcpu, run);
+
+ return true;
+}
+
+static void vgic_dispatch_sgi(struct kvm_vcpu *vcpu, u32 reg)
+{
+ struct kvm *kvm = vcpu->kvm;
+ struct vgic_dist *dist = &kvm->arch.vgic;
+ int nrcpus = atomic_read(&kvm->online_vcpus);
+ u8 target_cpus;
+ int sgi, mode, c, vcpu_id;
+
+ vcpu_id = vcpu->vcpu_id;
+
+ sgi = reg & 0xf;
+ target_cpus = (reg >> 16) & 0xff;
+ mode = (reg >> 24) & 3;
+
+ switch (mode) {
+ case 0:
+ if (!target_cpus)
+ return;
+ break; /* use the target list as provided */
+
+ case 1:
+ target_cpus = ((1 << nrcpus) - 1) & ~(1 << vcpu_id) & 0xff;
+ break;
+
+ case 2:
+ target_cpus = 1 << vcpu_id;
+ break;
+ }
+
+ kvm_for_each_vcpu(c, vcpu, kvm) {
+ if (target_cpus & 1) {
+ /* Flag the SGI as pending */
+ vgic_bitmap_set_irq_val(&dist->irq_state, c, sgi, 1);
+ dist->irq_sgi_sources[c][sgi] |= 1 << vcpu_id;
+ kvm_debug("SGI%d from CPU%d to CPU%d\n", sgi, vcpu_id, c);
+ }
+
+ target_cpus >>= 1;
+ }
+}
+
+static int compute_pending_for_cpu(struct kvm_vcpu *vcpu)
+{
+ return 0;
+}
+
+/*
+ * Update the interrupt state and determine which CPUs have pending
+ * interrupts. Must be called with distributor lock held.
+ */
+static void vgic_update_state(struct kvm *kvm)
+{
+ struct vgic_dist *dist = &kvm->arch.vgic;
+ struct kvm_vcpu *vcpu;
+ int c;
+
+ if (!dist->enabled) {
+ set_bit(0, &dist->irq_pending_on_cpu);
+ return;
+ }
+
+ kvm_for_each_vcpu(c, vcpu, kvm) {
+ if (compute_pending_for_cpu(vcpu)) {
+ pr_debug("CPU%d has pending interrupts\n", c);
+ set_bit(c, &dist->irq_pending_on_cpu);
+ }
+ }
}
^ permalink raw reply related [flat|nested] 8+ messages in thread
* [PATCH v2 05/10] ARM: KVM: VGIC virtual CPU interface management
2012-10-01 9:07 [PATCH v2 00/10] KVM/ARM Implementation Christoffer Dall
` (3 preceding siblings ...)
2012-10-01 9:08 ` [PATCH v2 04/10] ARM: KVM: VGIC distributor handling Christoffer Dall
@ 2012-10-01 9:08 ` Christoffer Dall
2012-10-01 9:09 ` [PATCH v2 00/10] KVM/ARM Implementation Christoffer Dall
5 siblings, 0 replies; 8+ messages in thread
From: Christoffer Dall @ 2012-10-01 9:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Add VGIC virtual CPU interface code, picking pending interrupts
from the distributor and stashing them in the VGIC control interface
list registers.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
---
arch/arm/include/asm/kvm_vgic.h | 41 +++++++
arch/arm/kvm/vgic.c | 224 +++++++++++++++++++++++++++++++++++++++
2 files changed, 264 insertions(+), 1 deletion(-)
diff --git a/arch/arm/include/asm/kvm_vgic.h b/arch/arm/include/asm/kvm_vgic.h
index a82699f..bb67076 100644
--- a/arch/arm/include/asm/kvm_vgic.h
+++ b/arch/arm/include/asm/kvm_vgic.h
@@ -193,17 +193,58 @@ struct vgic_dist {
};
struct vgic_cpu {
+#ifdef CONFIG_KVM_ARM_VGIC
+ /* per IRQ to LR mapping */
+ u8 vgic_irq_lr_map[VGIC_NR_IRQS];
+
+ /* Pending interrupts on this VCPU */
+ DECLARE_BITMAP(pending, VGIC_NR_IRQS);
+
+ /* Bitmap of used/free list registers */
+ DECLARE_BITMAP(lr_used, 64);
+
+ /* Number of list registers on this CPU */
+ int nr_lr;
+
+ /* CPU vif control registers for world switch */
+ u32 vgic_hcr;
+ u32 vgic_vmcr;
+ u32 vgic_misr; /* Saved only */
+ u32 vgic_eisr[2]; /* Saved only */
+ u32 vgic_elrsr[2]; /* Saved only */
+ u32 vgic_apr;
+ u32 vgic_lr[64]; /* A15 has only 4... */
+#endif
};
+#define VGIC_HCR_EN (1 << 0)
+#define VGIC_HCR_UIE (1 << 1)
+
+#define VGIC_LR_VIRTUALID (0x3ff << 0)
+#define VGIC_LR_PHYSID_CPUID (7 << 10)
+#define VGIC_LR_STATE (3 << 28)
+#define VGIC_LR_PENDING_BIT (1 << 28)
+#define VGIC_LR_ACTIVE_BIT (1 << 29)
+#define VGIC_LR_EOI (1 << 19)
+
+#define VGIC_MISR_EOI (1 << 0)
+#define VGIC_MISR_U (1 << 1)
+
+#define LR_EMPTY 0xff
+
struct kvm;
struct kvm_vcpu;
struct kvm_run;
struct kvm_exit_mmio;
#ifdef CONFIG_KVM_ARM_VGIC
+void kvm_vgic_sync_to_cpu(struct kvm_vcpu *vcpu);
+void kvm_vgic_sync_from_cpu(struct kvm_vcpu *vcpu);
+int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu);
bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
struct kvm_exit_mmio *mmio);
+#define irqchip_in_kernel(k) (!!((k)->arch.vgic.vctrl_base))
#else
static inline int kvm_vgic_hyp_init(void)
{
diff --git a/arch/arm/kvm/vgic.c b/arch/arm/kvm/vgic.c
index a870596..2b90785 100644
--- a/arch/arm/kvm/vgic.c
+++ b/arch/arm/kvm/vgic.c
@@ -584,7 +584,25 @@ static void vgic_dispatch_sgi(struct kvm_vcpu *vcpu, u32 reg)
static int compute_pending_for_cpu(struct kvm_vcpu *vcpu)
{
- return 0;
+ struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+ unsigned long *pending, *enabled, *pend;
+ int vcpu_id;
+
+ vcpu_id = vcpu->vcpu_id;
+ pend = vcpu->arch.vgic_cpu.pending;
+
+ pending = vgic_bitmap_get_cpu_map(&dist->irq_state, vcpu_id);
+ enabled = vgic_bitmap_get_cpu_map(&dist->irq_enabled, vcpu_id);
+ bitmap_and(pend, pending, enabled, 32);
+
+ pending = vgic_bitmap_get_shared_map(&dist->irq_state);
+ enabled = vgic_bitmap_get_shared_map(&dist->irq_enabled);
+ bitmap_and(pend + 1, pending, enabled, VGIC_NR_SHARED_IRQS);
+ bitmap_and(pend + 1, pend + 1,
+ vgic_bitmap_get_shared_map(&dist->irq_spi_target[vcpu_id]),
+ VGIC_NR_SHARED_IRQS);
+
+ return (find_first_bit(pend, VGIC_NR_IRQS) < VGIC_NR_IRQS);
}
/*
@@ -609,3 +627,207 @@ static void vgic_update_state(struct kvm *kvm)
}
}
}
+
+#define LR_PHYSID(lr) (((lr) & VGIC_LR_PHYSID_CPUID) >> 10)
+#define MK_LR_PEND(src, irq) (VGIC_LR_PENDING_BIT | ((src) << 10) | (irq))
+/*
+ * Queue an interrupt to a CPU virtual interface. Return true on success,
+ * or false if it wasn't possible to queue it.
+ */
+static bool vgic_queue_irq(struct kvm_vcpu *vcpu, u8 sgi_source_id, int irq)
+{
+ struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+ struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+ int lr, is_level;
+
+ /* Sanitize the input... */
+ BUG_ON(sgi_source_id & ~7);
+ BUG_ON(sgi_source_id && irq > 15);
+ BUG_ON(irq >= VGIC_NR_IRQS);
+
+ kvm_debug("Queue IRQ%d\n", irq);
+
+ lr = vgic_cpu->vgic_irq_lr_map[irq];
+ is_level = !vgic_irq_is_edge(dist, irq);
+
+ /* Do we have an active interrupt for the same CPUID? */
+ if (lr != LR_EMPTY &&
+ (LR_PHYSID(vgic_cpu->vgic_lr[lr]) == sgi_source_id)) {
+ kvm_debug("LR%d piggyback for IRQ%d %x\n", lr, irq, vgic_cpu->vgic_lr[lr]);
+ BUG_ON(!test_bit(lr, vgic_cpu->lr_used));
+ vgic_cpu->vgic_lr[lr] |= VGIC_LR_PENDING_BIT;
+ if (is_level)
+ vgic_cpu->vgic_lr[lr] |= VGIC_LR_EOI;
+ return true;
+ }
+
+ /* Try to use another LR for this interrupt */
+ lr = find_first_bit((unsigned long *)vgic_cpu->vgic_elrsr,
+ vgic_cpu->nr_lr);
+ if (lr >= vgic_cpu->nr_lr)
+ return false;
+
+ kvm_debug("LR%d allocated for IRQ%d %x\n", lr, irq, sgi_source_id);
+ vgic_cpu->vgic_lr[lr] = MK_LR_PEND(sgi_source_id, irq);
+ if (is_level)
+ vgic_cpu->vgic_lr[lr] |= VGIC_LR_EOI;
+
+ vgic_cpu->vgic_irq_lr_map[irq] = lr;
+ clear_bit(lr, (unsigned long *)vgic_cpu->vgic_elrsr);
+ set_bit(lr, vgic_cpu->lr_used);
+
+ return true;
+}
+
+/*
+ * Fill the list registers with pending interrupts before running the
+ * guest.
+ */
+static void __kvm_vgic_sync_to_cpu(struct kvm_vcpu *vcpu)
+{
+ struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+ struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+ unsigned long *pending;
+ int i, c, vcpu_id;
+ int overflow = 0;
+
+ vcpu_id = vcpu->vcpu_id;
+
+ /*
+ * We may not have any pending interrupt, or the interrupts
+ * may have been serviced from another vcpu. In all cases,
+ * move along.
+ */
+ if (!kvm_vgic_vcpu_pending_irq(vcpu)) {
+ pr_debug("CPU%d has no pending interrupt\n", vcpu_id);
+ goto epilog;
+ }
+
+ /* SGIs */
+ pending = vgic_bitmap_get_cpu_map(&dist->irq_state, vcpu_id);
+ for_each_set_bit(i, vgic_cpu->pending, 16) {
+ unsigned long sources;
+
+ sources = dist->irq_sgi_sources[vcpu_id][i];
+ for_each_set_bit(c, &sources, 8) {
+ if (!vgic_queue_irq(vcpu, c, i)) {
+ overflow = 1;
+ continue;
+ }
+
+ clear_bit(c, &sources);
+ }
+
+ if (!sources)
+ clear_bit(i, pending);
+
+ dist->irq_sgi_sources[vcpu_id][i] = sources;
+ }
+
+ /* PPIs */
+ for_each_set_bit_from(i, vgic_cpu->pending, 32) {
+ if (!vgic_queue_irq(vcpu, 0, i)) {
+ overflow = 1;
+ continue;
+ }
+
+ clear_bit(i, pending);
+ }
+
+ /* SPIs */
+ pending = vgic_bitmap_get_shared_map(&dist->irq_state);
+ for_each_set_bit_from(i, vgic_cpu->pending, VGIC_NR_IRQS) {
+ if (vgic_bitmap_get_irq_val(&dist->irq_active, 0, i))
+ continue; /* level interrupt, already queued */
+
+ if (!vgic_queue_irq(vcpu, 0, i)) {
+ overflow = 1;
+ continue;
+ }
+
+ /* Immediate clear on edge, set active on level */
+ if (vgic_irq_is_edge(dist, i))
+ clear_bit(i - 32, pending);
+ else
+ vgic_bitmap_set_irq_val(&dist->irq_active, 0, i, 1);
+ }
+
+epilog:
+ if (overflow)
+ vgic_cpu->vgic_hcr |= VGIC_HCR_UIE;
+ else {
+ vgic_cpu->vgic_hcr &= ~VGIC_HCR_UIE;
+ /*
+ * We're about to run this VCPU, and we've consumed
+ * everything the distributor had in store for
+ * us. Claim we don't have anything pending. We'll
+ * adjust that if needed while exiting.
+ */
+ clear_bit(vcpu_id, &dist->irq_pending_on_cpu);
+ }
+}
+
+/*
+ * Sync back the VGIC state after a guest run. We do not really touch
+ * the distributor here (the irq_pending_on_cpu bit is safe to set),
+ * so there is no need for taking its lock.
+ */
+static void __kvm_vgic_sync_from_cpu(struct kvm_vcpu *vcpu)
+{
+ struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+ struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+ int lr, pending;
+
+ /* Clear mappings for empty LRs */
+ for_each_set_bit(lr, (unsigned long *)vgic_cpu->vgic_elrsr,
+ vgic_cpu->nr_lr) {
+ int irq;
+
+ if (!test_and_clear_bit(lr, vgic_cpu->lr_used))
+ continue;
+
+ irq = vgic_cpu->vgic_lr[lr] & VGIC_LR_VIRTUALID;
+
+ BUG_ON(irq >= VGIC_NR_IRQS);
+ vgic_cpu->vgic_irq_lr_map[irq] = LR_EMPTY;
+ }
+
+ /* Check if we still have something up our sleeve... */
+ pending = find_first_zero_bit((unsigned long *)vgic_cpu->vgic_elrsr,
+ vgic_cpu->nr_lr);
+ if (pending < vgic_cpu->nr_lr) {
+ set_bit(vcpu->vcpu_id, &dist->irq_pending_on_cpu);
+ smp_mb();
+ }
+}
+
+void kvm_vgic_sync_to_cpu(struct kvm_vcpu *vcpu)
+{
+ struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+
+ if (!irqchip_in_kernel(vcpu->kvm))
+ return;
+
+ spin_lock(&dist->lock);
+ __kvm_vgic_sync_to_cpu(vcpu);
+ spin_unlock(&dist->lock);
+}
+
+void kvm_vgic_sync_from_cpu(struct kvm_vcpu *vcpu)
+{
+ if (!irqchip_in_kernel(vcpu->kvm))
+ return;
+
+ __kvm_vgic_sync_from_cpu(vcpu);
+}
+
+int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
+{
+ struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+
+ if (!irqchip_in_kernel(vcpu->kvm))
+ return 0;
+
+ return test_bit(vcpu->vcpu_id, &dist->irq_pending_on_cpu);
+}
* [PATCH v2 00/10] KVM/ARM Implementation
2012-10-01 9:07 [PATCH v2 00/10] KVM/ARM Implementation Christoffer Dall
` (4 preceding siblings ...)
2012-10-01 9:08 ` [PATCH v2 05/10] ARM: KVM: VGIC virtual CPU interface management Christoffer Dall
@ 2012-10-01 9:09 ` Christoffer Dall
5 siblings, 0 replies; 8+ messages in thread
From: Christoffer Dall @ 2012-10-01 9:09 UTC (permalink / raw)
To: linux-arm-kernel
On Mon, Oct 1, 2012 at 5:07 AM, Christoffer Dall
<c.dall@virtualopensystems.com> wrote:
> The following series implements KVM support for ARM processors,
> specifically on the Cortex A-15 platform. We feel this is ready to be
> merged.
>
> Work is done in collaboration between Columbia University, Virtual Open
> Systems and ARM/Linaro.
>
> The patch series applies to Linux 3.6 with a number of merges:
> 1. git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git
> branch: hyp-mode-boot-next (e5a04cb0b4a)
> 2. git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git
> branch: timers-next (437814c44c)
> 3. git://git.kernel.org/pub/scm/virt/kvm/kvm.git
> branch: next (1e08ec4a)
>
> This is Version 12 of the patch series, the first 10 versions were
> reviewed on the KVM/ARM and KVM mailing lists. Changes can also be
> pulled from:
> git://github.com/virtualopensystems/linux-kvm-arm.git
> branch: kvm-arm-v12
> branch: kvm-arm-v12-vgic
> branch: kvm-arm-v12-vgic-timers
>
> A non-flattened edition of the patch series, which can always be merged,
> can be found at:
> git://github.com/virtualopensystems/linux-kvm-arm.git kvm-arm-master
>
> This patch series requires QEMU compatibility. Use the branch
> git://github.com/virtualopensystems/qemu.git kvm-arm
>
> Following this patch series, which implements core KVM support, are two
> other patch series implementing Virtual Generic Interrupt Controller
> (VGIC) support and Architected Generic Timers. All three patch series
> should be applied for full QEMU compatibility.
>
> The implementation is broken up into a logical set of patches, the first
> are preparatory patches:
> 1. ARM: Add page table defines for KVM
> 2. ARM: Section based HYP idmaps
> 3. ARM: Factor out cpuid implementor and part_number fields
>
> The main implementation is broken up into separate patches, the first
> containing a skeleton of files, makefile changes, the basic user space
> interface and KVM architecture specific stubs. Subsequent patches
> implement parts of the system as listed:
> 4. Skeleton and reset hooks
> 5. Hypervisor initialization
> 6. Memory virtualization setup (hyp mode mappings and 2nd stage)
> 7. Inject IRQs and FIQs from userspace
> 8. World-switch implementation and Hyp exception vectors
> 9. Emulation framework and coproc emulation
> 10. Coproc user space API
> 11. Demux multiplexed coproc registers
> 12. User space API to get/set VFP registers
> 13. Handle guest user memory aborts
> 14. Handle guest MMIO aborts
>
> Testing:
> Tested on FAST Models and Versatile Express test-chip2. Tested by
> running three simultaneous VMs, all running SMP, on an SMP host, each
> VM running hackbench and cyclictest and with extreme memory pressure
> applied to the host with swapping enabled to provoke page eviction.
> Also tested KSM merging and GCC inside VMs. Fully boots both Ubuntu
> (user space Thumb-2) and Debian (user space ARM) guests.
>
> For a guide on how to set up a testing environment and try out these
> patches, see:
> http://www.virtualopensystems.com/media/pdf/kvm-arm-guide.pdf
>
> Changes since v11:
> - Memory setup and page table defines reworked
> - We do not export unused perf bitfields anymore
> - No module support anymore and following cleanup
> - Hide vcpu register accessors
> - Fix unmap range mmu notifier race condition
> - Factored out A15 coprocs in separate file
> - Factored out world-switch assembly macros to separate file
> - Add demux of multiplexed coprocs to user space
> - Add VFP get/set interface to user space
> - Addressed various cleanup comments from reviewers
>
> Changes since v10:
> - Boot in Hyp mode and use HVC to initialize HVBAR
> - Support VGIC
> - Support Arch timers
> - Support Thumb-2 mmio instruction decoding
> - Transition to GET_ONE/SET_ONE register API
> - Added KVM_VCPU_GET_REG_LIST
> - New interrupt injection API
> - Don't pin guest pages anymore
> - Fix race condition in page fault handler
> - Cleanup guest instruction copying.
> - Fix race when copying SMP guest instructions
> - Inject data/prefetch aborts when guest does something strange
>
> Changes since v9:
> - Addressed reviewer comments (see mailing list archive)
> - Limit the use of .arch_extension sec/virt to compilers that need them
> - VFP/Neon Support (Antonios Motakis)
> - Run exit handling under preemption and still handle guest cache ops
> - Add support for IO mapping at Hyp level (VGIC prep)
> - Add support for IO mapping at Guest level (VGIC prep)
> - Remove backdoor call to irq_svc
> - Complete rework of CP15 handling and register reset (Rusty Russell)
> - Don't use HSTR for anything else than CR 15
> - New ioctl to set emulation target core (only A15 supported for now)
> - Support KVM_GET_MSRS / KVM_SET_MSRS
> - Add page accounting and page table eviction
> - Change pgd lock to spinlock and fix sleeping in atomic bugs
> - Check kvm_condition_valid for HVC traps of undefs
> - Added a naive implementation of kvm_unmap_hva_range
>
> Changes since v8:
> - Support cache maintenance on SMP through set/way
> - Hyp mode idmaps are now section based and happen at kernel init
> - Handle aborts in Hyp mode
> - Inject undefined exceptions into the guest on error
> - Kernel-side reset of all crucial registers
> - Specifically state which target CPU is being virtualized
> - Exit statistics in debugfs
> - Some L2CTLR cp15 emulation cleanups
> - Support spte_hva for MMU notifiers and take write faults
> - FIX: Race condition in VMID generation
> - BUG: Run exit handling code with disabled preemption
> - Save/Restore abort fault register during world switch
>
> Changes since v7:
> - Traps accesses to ACTLR
> - Do not trap WFE execution
> - Upgrade barriers and TLB operations to inner-shareable domain
> - Restructure hyp_pgd related code to be more opaque
> - Random SMP fixes
> - Random BUG fixes
> - Improve commenting
> - Support module loading/unloading of KVM/ARM
> - Thumb-2 support for host kernel and KVM
> - Unaligned cross-page wide guest Thumb instruction fetching
> - Support ITSTATE fields in CPSR for Thumb guests
> - Document HCR settings
>
> Changes since v6:
> - Support for MMU notifiers to not pin user pages in memory
> - Support build with log debugging
> - Bugfix: v6 clobbered r7 in init code
> - Simplify hyp code mapping
> - Cleanup of register access code
> - Table-based CP15 emulation from Rusty Russell
> - Various other bug fixes and cleanups
>
> Changes since v5:
> - General bugfixes and nit fixes from reviews
> - Implemented re-use of VMIDs
> - Cleaned up the Hyp-mapping code to be readable by non-mm hackers
> (including myself)
> - Integrated preliminary SMP support in base patches
> - Lock-less interrupt injection and WFI support
> - Fixed signal handling while in guest (increases overall stability)
>
> Changes since v4:
> - Addressed reviewer comments from v4
> * cleanup debug and trace code
> * remove printks
> * fixup kvm_arch_vcpu_ioctl_run
> * add trace details to mmio emulation
> - Fix from Marc Zyngier: Move kvm_guest_enter/exit into non-preemptible
> section (squashed into world-switch patch)
> - Cleanup create_hyp_mappings/remove_hyp_mappings from Marc Zyngier
> (squashed into hypervisor initialization patch)
> - Removed the remove_hyp_mappings feature. Removing hypervisor mappings
> could potentially unmap other important data shared in the same page.
> - Removed the arm_ prefix from the arch-specific files.
> - Initial SMP host/guest support
>
> Changes since v3:
> - v4 actually works, fully boots a guest
> - Support compiling as a module
> - Use static inlines instead of macros for vcpu_reg and friends
> - Optimize kvm_vcpu_reg function
> - Use Ftrace for trace capabilities
> - Updated documentation and commenting
> - Use KVM_IRQ_LINE instead of KVM_INTERRUPT
> - Emulates load/store instructions not supported through HSR
> syndrome information.
> - Frees 2nd stage translation tables on VM teardown
> - Handles IRQ/FIQ instructions
> - Handles more CP15 accesses
> - Support guest WFI calls
> - Uses debugfs instead of /proc
> - Support compiling in Thumb mode
>
> Changes since v2:
> - Performs world-switch code
> - Maps guest memory using 2nd stage translation
> - Emulates co-processor 15 instructions
> - Forwards I/O faults to QEMU.
>
> ---
>
> Marc Zyngier (10):
> ARM: KVM: Keep track of currently running vcpus
> ARM: KVM: Initial VGIC infrastructure support
> ARM: KVM: Initial VGIC MMIO support code
> ARM: KVM: VGIC distributor handling
> ARM: KVM: VGIC virtual CPU interface management
> ARM: KVM: VGIC interrupt injection
> ARM: KVM: VGIC control interface world switch
> ARM: KVM: VGIC initialisation code
> ARM: KVM: vgic: reduce the number of vcpu kick
> ARM: KVM: Add VGIC configuration option
>
>
> arch/arm/include/asm/kvm_arm.h | 12
> arch/arm/include/asm/kvm_host.h | 16 +
> arch/arm/include/asm/kvm_vgic.h | 301 +++++++++++
> arch/arm/kernel/asm-offsets.c | 12
> arch/arm/kvm/Kconfig | 7
> arch/arm/kvm/Makefile | 1
> arch/arm/kvm/arm.c | 101 +++-
> arch/arm/kvm/interrupts.S | 4
> arch/arm/kvm/interrupts_head.S | 68 ++
> arch/arm/kvm/mmu.c | 3
> arch/arm/kvm/vgic.c | 1115 +++++++++++++++++++++++++++++++++++++++
> virt/kvm/kvm_main.c | 5
> 12 files changed, 1640 insertions(+), 5 deletions(-)
> create mode 100644 arch/arm/include/asm/kvm_vgic.h
> create mode 100644 arch/arm/kvm/vgic.c
>
> --
Please disregard this first mail-out, I messed up my stgit config.
-Christoffer
* [PATCH v2 03/10] ARM: KVM: Initial VGIC MMIO support code
2012-10-01 9:12 [PATCH v2 00/10] KVM/ARM vGIC support Christoffer Dall
@ 2012-10-01 9:13 ` Christoffer Dall
0 siblings, 0 replies; 8+ messages in thread
From: Christoffer Dall @ 2012-10-01 9:13 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Wire the initial in-kernel MMIO support code for the VGIC, used
for the distributor emulation.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
---
arch/arm/include/asm/kvm_vgic.h | 6 +-
arch/arm/kvm/Makefile | 1
arch/arm/kvm/vgic.c | 138 +++++++++++++++++++++++++++++++++++++++
3 files changed, 144 insertions(+), 1 deletion(-)
create mode 100644 arch/arm/kvm/vgic.c
diff --git a/arch/arm/include/asm/kvm_vgic.h b/arch/arm/include/asm/kvm_vgic.h
index e1fd530..a87ec6c 100644
--- a/arch/arm/include/asm/kvm_vgic.h
+++ b/arch/arm/include/asm/kvm_vgic.h
@@ -30,7 +30,11 @@ struct kvm_vcpu;
struct kvm_run;
struct kvm_exit_mmio;
-#ifndef CONFIG_KVM_ARM_VGIC
+#ifdef CONFIG_KVM_ARM_VGIC
+bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ struct kvm_exit_mmio *mmio);
+
+#else
static inline int kvm_vgic_hyp_init(void)
{
return 0;
diff --git a/arch/arm/kvm/Makefile b/arch/arm/kvm/Makefile
index ea5b282..89608c0 100644
--- a/arch/arm/kvm/Makefile
+++ b/arch/arm/kvm/Makefile
@@ -20,3 +20,4 @@ obj-$(CONFIG_KVM_ARM_HOST) += $(addprefix ../../../virt/kvm/, kvm_main.o coalesc
obj-$(CONFIG_KVM_ARM_HOST) += arm.o guest.o mmu.o emulate.o reset.o
obj-$(CONFIG_KVM_ARM_HOST) += coproc.o coproc_a15.o
+obj-$(CONFIG_KVM_ARM_VGIC) += vgic.o
diff --git a/arch/arm/kvm/vgic.c b/arch/arm/kvm/vgic.c
new file mode 100644
index 0000000..26ada3b
--- /dev/null
+++ b/arch/arm/kvm/vgic.c
@@ -0,0 +1,138 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/kvm.h>
+#include <linux/kvm_host.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <asm/kvm_emulate.h>
+
+#define ACCESS_READ_VALUE (1 << 0)
+#define ACCESS_READ_RAZ (0 << 0)
+#define ACCESS_READ_MASK(x) ((x) & (1 << 0))
+#define ACCESS_WRITE_IGNORED (0 << 1)
+#define ACCESS_WRITE_SETBIT (1 << 1)
+#define ACCESS_WRITE_CLEARBIT (2 << 1)
+#define ACCESS_WRITE_VALUE (3 << 1)
+#define ACCESS_WRITE_MASK(x) ((x) & (3 << 1))
+
+/**
+ * vgic_reg_access - access vgic register
+ * @mmio: pointer to the data describing the mmio access
+ * @reg: pointer to the virtual backing of the vgic distributor struct
+ * @offset: offset into the register; its least significant 2 bits give the byte offset within the word
+ * @mode: ACCESS_ mode (see defines above)
+ *
+ * Helper that performs a vgic distributor register access using one of
+ * the access modes defined above
+ * (read, raz, write-ignored, setbit, clearbit, write)
+ */
+static void vgic_reg_access(struct kvm_exit_mmio *mmio, u32 *reg,
+ u32 offset, int mode)
+{
+ int word_offset = offset & 3;
+ int shift = word_offset * 8;
+ u32 mask;
+ u32 regval;
+
+ /*
+ * Any alignment fault should have been delivered to the guest
+ * directly (ARM ARM B3.12.7 "Prioritization of aborts").
+ */
+
+ mask = (~0U) >> (word_offset * 8);
+ if (reg)
+ regval = *reg;
+ else {
+ BUG_ON(mode != (ACCESS_READ_RAZ | ACCESS_WRITE_IGNORED));
+ regval = 0;
+ }
+
+ if (mmio->is_write) {
+ u32 data = (*((u32 *)mmio->data) & mask) << shift;
+ switch (ACCESS_WRITE_MASK(mode)) {
+ case ACCESS_WRITE_IGNORED:
+ return;
+
+ case ACCESS_WRITE_SETBIT:
+ regval |= data;
+ break;
+
+ case ACCESS_WRITE_CLEARBIT:
+ regval &= ~data;
+ break;
+
+ case ACCESS_WRITE_VALUE:
+ regval = (regval & ~(mask << shift)) | data;
+ break;
+ }
+ *reg = regval;
+ } else {
+ switch (ACCESS_READ_MASK(mode)) {
+ case ACCESS_READ_RAZ:
+ regval = 0;
+ /* fall through */
+
+ case ACCESS_READ_VALUE:
+ *((u32 *)mmio->data) = (regval >> shift) & mask;
+ }
+ }
+}
+
+/* All this should be handled by kvm_bus_io_*()... FIXME!!! */
+struct mmio_range {
+ unsigned long base;
+ unsigned long len;
+ bool (*handle_mmio)(struct kvm_vcpu *vcpu, struct kvm_exit_mmio *mmio,
+ u32 offset);
+};
+
+static const struct mmio_range vgic_ranges[] = {
+ {}
+};
+
+static const
+struct mmio_range *find_matching_range(const struct mmio_range *ranges,
+ struct kvm_exit_mmio *mmio,
+ unsigned long base)
+{
+ const struct mmio_range *r = ranges;
+ unsigned long addr = mmio->phys_addr - base;
+
+ while (r->len) {
+ if (addr >= r->base &&
+ (addr + mmio->len) <= (r->base + r->len))
+ return r;
+ r++;
+ }
+
+ return NULL;
+}
+
+/**
+ * vgic_handle_mmio - handle an in-kernel MMIO access
+ * @vcpu: pointer to the vcpu performing the access
+ * @mmio: pointer to the data describing the access
+ *
+ * returns true if the MMIO access has been performed in kernel space,
+ * and false if it needs to be emulated in user space.
+ */
+bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run, struct kvm_exit_mmio *mmio)
+{
+	return false;	/* no ranges handled in kernel yet; emulate in user space */
+}
2012-10-01 9:07 [PATCH v2 00/10] KVM/ARM Implementation Christoffer Dall
2012-10-01 9:07 ` [PATCH v2 01/10] ARM: KVM: Keep track of currently running vcpus Christoffer Dall
2012-10-01 9:07 ` [PATCH v2 02/10] ARM: KVM: Initial VGIC infrastructure support Christoffer Dall
2012-10-01 9:07 ` [PATCH v2 03/10] ARM: KVM: Initial VGIC MMIO support code Christoffer Dall
2012-10-01 9:08 ` [PATCH v2 04/10] ARM: KVM: VGIC distributor handling Christoffer Dall
2012-10-01 9:08 ` [PATCH v2 05/10] ARM: KVM: VGIC virtual CPU interface management Christoffer Dall
2012-10-01 9:09 ` [PATCH v2 00/10] KVM/ARM Implementation Christoffer Dall
2012-10-01 9:12 [PATCH v2 00/10] KVM/ARM vGIC support Christoffer Dall
2012-10-01 9:13 ` [PATCH v2 03/10] ARM: KVM: Initial VGIC MMIO support code Christoffer Dall