* [PULL 00/79] KVM/ARM Changes for v4.12
@ 2017-04-23 17:08 Christoffer Dall
From: Christoffer Dall @ 2017-04-23 17:08 UTC
To: linux-arm-kernel
Hi Paolo and Radim,
Here are the changes for KVM/ARM for v4.12 so far. I may send another
pull request next week with the ITS save/restore patches if we feel they
are ready. The ABI part of the ITS save/restore patches has matured for
a while, so we just need to settle on the implementation bits.
Note that this pull request shares a common base branch with the arm64
tree which has already been pulled by the arm64 folks into their
for-next/core branch.
As for these changes, they include:
- Using the common sysreg definitions between KVM and arm64
- Improved hyp-stub implementation with support for kexec and kdump on the 32-bit side
- Proper PMU exception handling
- Performance improvements of our GIC handling
- Support for irqchip in userspace with in-kernel arch-timers and PMU support
- A fix for a race condition in our PSCI code
The following changes since commit 97da3854c526d3a6ee05c849c96e48d21527606c:
Linux 4.11-rc3 (2017-03-19 19:09:39 -0700)
are available in the git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm.git kvm-arm-for-v4.12
for you to fetch changes up to 1edb632133efb6226b6bef3e7d9fa8c7134ac4e2:
ARM: KVM: Fix idmap stub entry when running Thumb-2 code (2017-04-20 20:17:57 +0200)
Thanks,
-Christoffer
---
Alexander Graf (2):
KVM: arm/arm64: Add ARM user space interrupt signaling ABI
KVM: arm/arm64: Support arch timers with a userspace gic
Andrew Jones (1):
KVM: arm/arm64: fix races in kvm_psci_vcpu_on
Christoffer Dall (12):
KVM: arm/arm64: vgic: Defer touching GICH_VMCR to vcpu_load/put
KVM: arm/arm64: vgic: Get rid of live_lrs
KVM: arm/arm64: vgic: Only set underflow when actually out of LRs
KVM: arm/arm64: vgic: Get rid of unnecessary process_maintenance
operation
KVM: arm/arm64: vgic: Get rid of unnecessary save_maint_int_state
KVM: arm/arm64: vgic: Get rid of MISR and EISR fields
KVM: arm/arm64: vgic: Implement early VGIC init functionality
KVM: arm/arm64: vgic: Don't check vgic_initialized in sync/flush
KVM: arm/arm64: vgic: Improve sync_hwstate performance
KVM: arm/arm64: Cleanup the arch timer code's irqchip checking
KVM: arm/arm64: Report PMU overflow interrupts to userspace irqchip
KVM: arm/arm64: Advertise support for KVM_CAP_ARM_USER_IRQ
Marc Zyngier (45):
arm64: KVM: PMU: Refactor pmu_*_el0_disabled
arm64: KVM: PMU: Inject UNDEF exception on illegal register access
arm64: KVM: PMU: Inject UNDEF on non-privileged accesses
arm64: KVM: Make unexpected reads from WO registers inject an undef
arm64: KVM: PMU: Inject UNDEF on read access to PMSWINC_EL0
arm64: KVM: Treat sysreg accessors returning false as successful
arm64: KVM: Do not corrupt registers on failed 64bit CP read
arm: KVM: Make unexpected register accesses inject an undef
arm: KVM: Treat CP15 accessors returning false as successful
arm64: hyp-stub: Stop pointlessly clobbering lr
arm64: KVM: Move lr save/restore to do_el2_call
arm64: hyp-stub: Don't save lr in the EL1 code
arm64: hyp-stub: Define a return value for failed stub calls
arm64: hyp-stub: Update documentation in asm/virt.h
arm64: hyp-stub: Implement HVC_RESET_VECTORS stub hypercall
arm64: KVM: Implement HVC_RESET_VECTORS stub hypercall in the init
code
arm64: KVM: Implement HVC_GET_VECTORS in the init code
arm64: KVM: Allow the main HYP code to use the init hyp stub
implementation
arm64: KVM: Convert __cpu_reset_hyp_mode to using __hyp_reset_vectors
arm64: KVM: Implement HVC_SOFT_RESTART in the init code
ARM: KVM: Convert KVM to use HVC_GET_VECTORS
ARM: Update cpu_v7_reset documentation
ARM: hyp-stub: Use r1 for the soft-restart address
ARM: Expose the VA/IDMAP offset
ARM: hyp-stub: Define a return value for failed stub calls
ARM: hyp-stub: Implement HVC_RESET_VECTORS stub hypercall
ARM: KVM: Implement HVC_RESET_VECTORS stub hypercall in the init code
ARM: KVM: Implement HVC_GET_VECTORS in the init code
ARM: KVM: Allow the main HYP code to use the init hyp stub
implementation
ARM: KVM: Convert __cpu_reset_hyp_mode to using __hyp_reset_vectors
ARM: KVM: Implement HVC_SOFT_RESTART in the init code
ARM: KVM: Gracefully handle hyp-stubs being restored from under our
feet
arm/arm64: KVM: Use __hyp_reset_vectors() directly
arm/arm64: KVM: Remove kvm_get_idmap_start
arm/arm64: KVM: Use HVC_RESET_VECTORS to reinit HYP mode
ARM: decompressor: Remove __hyp_get_vectors usage
ARM: hyp-stub/KVM: Kill __hyp_get_vectors
arm64: hyp-stub/KVM: Kill __hyp_get_vectors
arm64: hyp-stub: Zero x0 on successful stub handling
ARM: hyp-stub: Zero r0 on successful stub handling
arm/arm64: Add hyp-stub API documentation
KVM: arm/arm64: vgic-v3: De-optimize VMCR save/restore when emulating
a GICv2
KVM: arm/arm64: vgic-v3: Fix off-by-one LR access
ARM: hyp-stub: Fix Thumb-2 compilation
ARM: KVM: Fix idmap stub entry when running Thumb-2 code
Mark Rutland (15):
arm64: sysreg: sort by encoding
arm64: sysreg: add debug system registers
arm64: sysreg: add performance monitor registers
arm64: sysreg: subsume GICv3 sysreg definitions
arm64: sysreg: add physical timer registers
arm64: sysreg: add register encodings used by KVM
arm64: sysreg: add Set/Way sys encodings
KVM: arm64: add SYS_DESC()
KVM: arm64: Use common debug sysreg definitions
KVM: arm64: Use common performance monitor sysreg definitions
KVM: arm64: Use common GICv3 sysreg definitions
KVM: arm64: Use common physical timer sysreg definitions
KVM: arm64: use common invariant sysreg definitions
KVM: arm64: Use common sysreg definitions
KVM: arm64: Use common Set/Way sys definitions
Russell King (2):
ARM: hyp-stub: improve ABI
ARM: soft-reboot into same mode that we entered the kernel
Shih-Wei Li (1):
KVM: arm/arm64: vgic: Avoid flushing vgic state when there's no
pending IRQ
Suzuki K Poulose (1):
kvm: arm/arm64: Rework gpa callback handlers
Documentation/virtual/kvm/api.txt | 42 +++
Documentation/virtual/kvm/arm/hyp-abi.txt | 53 ++++
arch/arm/boot/compressed/head.S | 12 +-
arch/arm/include/asm/kvm_asm.h | 7 +-
arch/arm/include/asm/kvm_host.h | 6 -
arch/arm/include/asm/kvm_mmu.h | 1 -
arch/arm/include/asm/proc-fns.h | 4 +-
arch/arm/include/asm/virt.h | 14 +-
arch/arm/include/uapi/asm/kvm.h | 2 +
arch/arm/kernel/hyp-stub.S | 43 ++-
arch/arm/kernel/reboot.c | 7 +-
arch/arm/kvm/arm.c | 66 ++--
arch/arm/kvm/coproc.c | 24 +-
arch/arm/kvm/coproc.h | 18 --
arch/arm/kvm/handle_exit.c | 8 +
arch/arm/kvm/hyp/hyp-entry.S | 28 +-
arch/arm/kvm/init.S | 51 ++-
arch/arm/kvm/interrupts.S | 4 -
arch/arm/kvm/mmu.c | 36 +--
arch/arm/kvm/psci.c | 8 +-
arch/arm/mm/mmu.c | 5 +
arch/arm/mm/proc-v7.S | 15 +-
arch/arm64/include/asm/arch_gicv3.h | 81 +----
arch/arm64/include/asm/kvm_asm.h | 5 +-
arch/arm64/include/asm/kvm_host.h | 7 -
arch/arm64/include/asm/kvm_mmu.h | 1 -
arch/arm64/include/asm/sysreg.h | 162 +++++++++-
arch/arm64/include/asm/virt.h | 31 +-
arch/arm64/include/uapi/asm/kvm.h | 2 +
arch/arm64/kernel/head.S | 8 +-
arch/arm64/kernel/hyp-stub.S | 38 +--
arch/arm64/kvm/hyp-init.S | 46 ++-
arch/arm64/kvm/hyp.S | 5 +-
arch/arm64/kvm/hyp/hyp-entry.S | 43 ++-
arch/arm64/kvm/sys_regs.c | 496 +++++++++++-------------------
arch/arm64/kvm/sys_regs.h | 23 +-
arch/arm64/kvm/sys_regs_generic_v8.c | 4 +-
include/kvm/arm_arch_timer.h | 2 +
include/kvm/arm_pmu.h | 7 +
include/kvm/arm_vgic.h | 9 +-
include/uapi/linux/kvm.h | 8 +
virt/kvm/arm/arch_timer.c | 124 ++++++--
virt/kvm/arm/hyp/vgic-v2-sr.c | 78 +----
virt/kvm/arm/hyp/vgic-v3-sr.c | 87 ++----
virt/kvm/arm/pmu.c | 39 ++-
virt/kvm/arm/vgic/vgic-init.c | 108 ++++---
virt/kvm/arm/vgic/vgic-v2.c | 90 +++---
virt/kvm/arm/vgic/vgic-v3.c | 87 +++---
virt/kvm/arm/vgic/vgic.c | 60 +++-
virt/kvm/arm/vgic/vgic.h | 8 +-
50 files changed, 1175 insertions(+), 938 deletions(-)
create mode 100644 Documentation/virtual/kvm/arm/hyp-abi.txt
* [PULL 01/79] arm64: sysreg: sort by encoding
From: Christoffer Dall @ 2017-04-23 17:08 UTC
To: linux-arm-kernel
From: Mark Rutland <mark.rutland@arm.com>
Our sysreg definitions are largely (but not entirely) in ascending order
of op0:op1:CRn:CRm:op2.
It would be preferable to enforce this sort, as this makes it easier to
verify the set of encodings against documentation, and provides an
obvious location for each addition in future, minimising conflicts.
This patch enforces this order, by moving the few items that break it.
There should be no functional change.
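As a quick illustration (a userspace sketch, not part of the patch; the
field shifts mirror <asm/sysreg.h> but are assumed here): because
sys_reg() places Op0 in the most significant field and Op2 in the least,
ordering entries by op0:op1:CRn:CRm:op2 is the same as ordering the
packed encodings numerically.

#include <stdint.h>
#include <stdio.h>

/* Stand-in for sys_reg(); shifts mirror <asm/sysreg.h> (assumed values). */
#define SYS_ENC(op0, op1, crn, crm, op2) \
	(((uint32_t)(op0) << 19) | ((uint32_t)(op1) << 16) | \
	 ((uint32_t)(crn) << 12) | ((uint32_t)(crm) << 8) | ((uint32_t)(op2) << 5))

int main(void)
{
	uint32_t ctr    = SYS_ENC(3, 3, 0, 0, 1);	/* SYS_CTR_EL0 */
	uint32_t cntfrq = SYS_ENC(3, 3, 14, 0, 0);	/* SYS_CNTFRQ_EL0 */

	/* CTR_EL0 (CRn=0) sorts before CNTFRQ_EL0 (CRn=14), as in this patch. */
	printf("CTR_EL0 < CNTFRQ_EL0: %s\n", ctr < cntfrq ? "yes" : "no");
	return 0;
}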
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
arch/arm64/include/asm/sysreg.h | 17 +++++++++--------
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index ac24b6e..e6498ac 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -81,6 +81,14 @@
#endif /* CONFIG_BROKEN_GAS_INST */
+#define REG_PSTATE_PAN_IMM sys_reg(0, 0, 4, 0, 4)
+#define REG_PSTATE_UAO_IMM sys_reg(0, 0, 4, 0, 3)
+
+#define SET_PSTATE_PAN(x) __emit_inst(0xd5000000 | REG_PSTATE_PAN_IMM | \
+ (!!x)<<8 | 0x1f)
+#define SET_PSTATE_UAO(x) __emit_inst(0xd5000000 | REG_PSTATE_UAO_IMM | \
+ (!!x)<<8 | 0x1f)
+
#define SYS_MIDR_EL1 sys_reg(3, 0, 0, 0, 0)
#define SYS_MPIDR_EL1 sys_reg(3, 0, 0, 0, 5)
#define SYS_REVIDR_EL1 sys_reg(3, 0, 0, 0, 6)
@@ -118,17 +126,10 @@
#define SYS_ID_AA64MMFR1_EL1 sys_reg(3, 0, 0, 7, 1)
#define SYS_ID_AA64MMFR2_EL1 sys_reg(3, 0, 0, 7, 2)
-#define SYS_CNTFRQ_EL0 sys_reg(3, 3, 14, 0, 0)
#define SYS_CTR_EL0 sys_reg(3, 3, 0, 0, 1)
#define SYS_DCZID_EL0 sys_reg(3, 3, 0, 0, 7)
-#define REG_PSTATE_PAN_IMM sys_reg(0, 0, 4, 0, 4)
-#define REG_PSTATE_UAO_IMM sys_reg(0, 0, 4, 0, 3)
-
-#define SET_PSTATE_PAN(x) __emit_inst(0xd5000000 | REG_PSTATE_PAN_IMM | \
- (!!x)<<8 | 0x1f)
-#define SET_PSTATE_UAO(x) __emit_inst(0xd5000000 | REG_PSTATE_UAO_IMM | \
- (!!x)<<8 | 0x1f)
+#define SYS_CNTFRQ_EL0 sys_reg(3, 3, 14, 0, 0)
/* Common SCTLR_ELx flags. */
#define SCTLR_ELx_EE (1 << 25)
--
2.9.0
* [PULL 02/79] arm64: sysreg: add debug system registers
From: Christoffer Dall @ 2017-04-23 17:08 UTC
To: linux-arm-kernel
From: Mark Rutland <mark.rutland@arm.com>
This patch adds sysreg definitions for system registers in the debug and
trace system register encoding space. Subsequent patches will make use
of these definitions.
The encodings were taken from ARM DDI 0487A.k_iss10775, Table C5-5.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
arch/arm64/include/asm/sysreg.h | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index e6498ac..b54f8a4 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -89,6 +89,29 @@
#define SET_PSTATE_UAO(x) __emit_inst(0xd5000000 | REG_PSTATE_UAO_IMM | \
(!!x)<<8 | 0x1f)
+#define SYS_OSDTRRX_EL1 sys_reg(2, 0, 0, 0, 2)
+#define SYS_MDCCINT_EL1 sys_reg(2, 0, 0, 2, 0)
+#define SYS_MDSCR_EL1 sys_reg(2, 0, 0, 2, 2)
+#define SYS_OSDTRTX_EL1 sys_reg(2, 0, 0, 3, 2)
+#define SYS_OSECCR_EL1 sys_reg(2, 0, 0, 6, 2)
+#define SYS_DBGBVRn_EL1(n) sys_reg(2, 0, 0, n, 4)
+#define SYS_DBGBCRn_EL1(n) sys_reg(2, 0, 0, n, 5)
+#define SYS_DBGWVRn_EL1(n) sys_reg(2, 0, 0, n, 6)
+#define SYS_DBGWCRn_EL1(n) sys_reg(2, 0, 0, n, 7)
+#define SYS_MDRAR_EL1 sys_reg(2, 0, 1, 0, 0)
+#define SYS_OSLAR_EL1 sys_reg(2, 0, 1, 0, 4)
+#define SYS_OSLSR_EL1 sys_reg(2, 0, 1, 1, 4)
+#define SYS_OSDLR_EL1 sys_reg(2, 0, 1, 3, 4)
+#define SYS_DBGPRCR_EL1 sys_reg(2, 0, 1, 4, 4)
+#define SYS_DBGCLAIMSET_EL1 sys_reg(2, 0, 7, 8, 6)
+#define SYS_DBGCLAIMCLR_EL1 sys_reg(2, 0, 7, 9, 6)
+#define SYS_DBGAUTHSTATUS_EL1 sys_reg(2, 0, 7, 14, 6)
+#define SYS_MDCCSR_EL0 sys_reg(2, 3, 0, 1, 0)
+#define SYS_DBGDTR_EL0 sys_reg(2, 3, 0, 4, 0)
+#define SYS_DBGDTRRX_EL0 sys_reg(2, 3, 0, 5, 0)
+#define SYS_DBGDTRTX_EL0 sys_reg(2, 3, 0, 5, 0)
+#define SYS_DBGVCR32_EL2 sys_reg(2, 4, 0, 7, 0)
+
#define SYS_MIDR_EL1 sys_reg(3, 0, 0, 0, 0)
#define SYS_MPIDR_EL1 sys_reg(3, 0, 0, 0, 5)
#define SYS_REVIDR_EL1 sys_reg(3, 0, 0, 0, 6)
--
2.9.0
* [PULL 03/79] arm64: sysreg: add performance monitor registers
From: Christoffer Dall @ 2017-04-23 17:08 UTC
To: linux-arm-kernel
From: Mark Rutland <mark.rutland@arm.com>
This patch adds sysreg definitions for system registers which are part
of the performance monitors extension. Subsequent patches will make use
of these definitions.
The set of registers is described in ARM DDI 0487A.k_iss10775, Table
D5-9. The encodings were taken from Table C5-6 in the same document.
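As an aside (a userspace sketch, not from the patch), the per-counter
macros below split the counter index n across CRm and op2: op2 carries
n[2:0], and CRm carries 0b10:n[4:3] for the event counters (0b11:n[4:3]
for the event type registers).

#include <stdio.h>

/* Same arithmetic as __PMEV_op2() and __CNTR_CRm() in the patch below. */
static unsigned int pmev_op2(unsigned int n) { return n & 0x7; }
static unsigned int cntr_crm(unsigned int n) { return 0x8 | ((n >> 3) & 0x3); }

int main(void)
{
	/* PMEVCNTR17_EL0 lands at CRm = 0b1010 (10), op2 = 0b001 (1). */
	printf("PMEVCNTR17_EL0: CRm=%u op2=%u\n", cntr_crm(17), pmev_op2(17));
	return 0;
}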
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
arch/arm64/include/asm/sysreg.h | 25 +++++++++++++++++++++++++
1 file changed, 25 insertions(+)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index b54f8a4..3498d02 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -149,11 +149,36 @@
#define SYS_ID_AA64MMFR1_EL1 sys_reg(3, 0, 0, 7, 1)
#define SYS_ID_AA64MMFR2_EL1 sys_reg(3, 0, 0, 7, 2)
+#define SYS_PMINTENSET_EL1 sys_reg(3, 0, 9, 14, 1)
+#define SYS_PMINTENCLR_EL1 sys_reg(3, 0, 9, 14, 2)
+
#define SYS_CTR_EL0 sys_reg(3, 3, 0, 0, 1)
#define SYS_DCZID_EL0 sys_reg(3, 3, 0, 0, 7)
+#define SYS_PMCR_EL0 sys_reg(3, 3, 9, 12, 0)
+#define SYS_PMCNTENSET_EL0 sys_reg(3, 3, 9, 12, 1)
+#define SYS_PMCNTENCLR_EL0 sys_reg(3, 3, 9, 12, 2)
+#define SYS_PMOVSCLR_EL0 sys_reg(3, 3, 9, 12, 3)
+#define SYS_PMSWINC_EL0 sys_reg(3, 3, 9, 12, 4)
+#define SYS_PMSELR_EL0 sys_reg(3, 3, 9, 12, 5)
+#define SYS_PMCEID0_EL0 sys_reg(3, 3, 9, 12, 6)
+#define SYS_PMCEID1_EL0 sys_reg(3, 3, 9, 12, 7)
+#define SYS_PMCCNTR_EL0 sys_reg(3, 3, 9, 13, 0)
+#define SYS_PMXEVTYPER_EL0 sys_reg(3, 3, 9, 13, 1)
+#define SYS_PMXEVCNTR_EL0 sys_reg(3, 3, 9, 13, 2)
+#define SYS_PMUSERENR_EL0 sys_reg(3, 3, 9, 14, 0)
+#define SYS_PMOVSSET_EL0 sys_reg(3, 3, 9, 14, 3)
+
#define SYS_CNTFRQ_EL0 sys_reg(3, 3, 14, 0, 0)
+#define __PMEV_op2(n) ((n) & 0x7)
+#define __CNTR_CRm(n) (0x8 | (((n) >> 3) & 0x3))
+#define SYS_PMEVCNTRn_EL0(n) sys_reg(3, 3, 14, __CNTR_CRm(n), __PMEV_op2(n))
+#define __TYPER_CRm(n) (0xc | (((n) >> 3) & 0x3))
+#define SYS_PMEVTYPERn_EL0(n) sys_reg(3, 3, 14, __TYPER_CRm(n), __PMEV_op2(n))
+
+#define SYS_PMCCFILTR_EL0 sys_reg (3, 3, 14, 15, 7)
+
/* Common SCTLR_ELx flags. */
#define SCTLR_ELx_EE (1 << 25)
#define SCTLR_ELx_I (1 << 12)
--
2.9.0
* [PULL 04/79] arm64: sysreg: subsume GICv3 sysreg definitions
From: Christoffer Dall @ 2017-04-23 17:08 UTC
To: linux-arm-kernel
From: Mark Rutland <mark.rutland@arm.com>
Unlike most sysreg definitions, the GICv3 definitions don't have a SYS_
prefix, and they don't live in <asm/sysreg.h>. Additionally, some
definitions are duplicated elsewhere (e.g. in the KVM save/restore
code).
For consistency, and to make it possible to share a common definition
for these sysregs, this patch moves the definitions to <asm/sysreg.h>,
adding a SYS_ prefix, and sorting the registers per their encoding.
Existing users of the definitions are fixed up so that this change is
not problematic.
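One call-site detail, sketched below with userspace stand-ins (the
string value and the read_sysreg_s() stub are placeholders, not the
kernel accessors): read_gicreg()/write_gicreg() now paste the SYS_
prefix on via token concatenation, so existing callers keep using the
short GIC register names unchanged.

#include <stdio.h>

#define SYS_ICH_VTR_EL2		"SYS_ICH_VTR_EL2"	/* placeholder value */
#define read_sysreg_s(r)	(r)			/* stub, not the kernel accessor */
#define read_gicreg(r)		read_sysreg_s(SYS_ ## r)

int main(void)
{
	/* The caller still writes ICH_VTR_EL2; the macro resolves the SYS_ name. */
	printf("%s\n", read_gicreg(ICH_VTR_EL2));
	return 0;
}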
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
arch/arm64/include/asm/arch_gicv3.h | 81 ++++++-------------------------------
arch/arm64/include/asm/sysreg.h | 52 ++++++++++++++++++++++++
arch/arm64/kernel/head.S | 8 ++--
3 files changed, 69 insertions(+), 72 deletions(-)
diff --git a/arch/arm64/include/asm/arch_gicv3.h b/arch/arm64/include/asm/arch_gicv3.h
index f37e3a2..1a98bc8 100644
--- a/arch/arm64/include/asm/arch_gicv3.h
+++ b/arch/arm64/include/asm/arch_gicv3.h
@@ -20,69 +20,14 @@
#include <asm/sysreg.h>
-#define ICC_EOIR1_EL1 sys_reg(3, 0, 12, 12, 1)
-#define ICC_DIR_EL1 sys_reg(3, 0, 12, 11, 1)
-#define ICC_IAR1_EL1 sys_reg(3, 0, 12, 12, 0)
-#define ICC_SGI1R_EL1 sys_reg(3, 0, 12, 11, 5)
-#define ICC_PMR_EL1 sys_reg(3, 0, 4, 6, 0)
-#define ICC_CTLR_EL1 sys_reg(3, 0, 12, 12, 4)
-#define ICC_SRE_EL1 sys_reg(3, 0, 12, 12, 5)
-#define ICC_GRPEN1_EL1 sys_reg(3, 0, 12, 12, 7)
-#define ICC_BPR1_EL1 sys_reg(3, 0, 12, 12, 3)
-
-#define ICC_SRE_EL2 sys_reg(3, 4, 12, 9, 5)
-
-/*
- * System register definitions
- */
-#define ICH_VSEIR_EL2 sys_reg(3, 4, 12, 9, 4)
-#define ICH_HCR_EL2 sys_reg(3, 4, 12, 11, 0)
-#define ICH_VTR_EL2 sys_reg(3, 4, 12, 11, 1)
-#define ICH_MISR_EL2 sys_reg(3, 4, 12, 11, 2)
-#define ICH_EISR_EL2 sys_reg(3, 4, 12, 11, 3)
-#define ICH_ELSR_EL2 sys_reg(3, 4, 12, 11, 5)
-#define ICH_VMCR_EL2 sys_reg(3, 4, 12, 11, 7)
-
-#define __LR0_EL2(x) sys_reg(3, 4, 12, 12, x)
-#define __LR8_EL2(x) sys_reg(3, 4, 12, 13, x)
-
-#define ICH_LR0_EL2 __LR0_EL2(0)
-#define ICH_LR1_EL2 __LR0_EL2(1)
-#define ICH_LR2_EL2 __LR0_EL2(2)
-#define ICH_LR3_EL2 __LR0_EL2(3)
-#define ICH_LR4_EL2 __LR0_EL2(4)
-#define ICH_LR5_EL2 __LR0_EL2(5)
-#define ICH_LR6_EL2 __LR0_EL2(6)
-#define ICH_LR7_EL2 __LR0_EL2(7)
-#define ICH_LR8_EL2 __LR8_EL2(0)
-#define ICH_LR9_EL2 __LR8_EL2(1)
-#define ICH_LR10_EL2 __LR8_EL2(2)
-#define ICH_LR11_EL2 __LR8_EL2(3)
-#define ICH_LR12_EL2 __LR8_EL2(4)
-#define ICH_LR13_EL2 __LR8_EL2(5)
-#define ICH_LR14_EL2 __LR8_EL2(6)
-#define ICH_LR15_EL2 __LR8_EL2(7)
-
-#define __AP0Rx_EL2(x) sys_reg(3, 4, 12, 8, x)
-#define ICH_AP0R0_EL2 __AP0Rx_EL2(0)
-#define ICH_AP0R1_EL2 __AP0Rx_EL2(1)
-#define ICH_AP0R2_EL2 __AP0Rx_EL2(2)
-#define ICH_AP0R3_EL2 __AP0Rx_EL2(3)
-
-#define __AP1Rx_EL2(x) sys_reg(3, 4, 12, 9, x)
-#define ICH_AP1R0_EL2 __AP1Rx_EL2(0)
-#define ICH_AP1R1_EL2 __AP1Rx_EL2(1)
-#define ICH_AP1R2_EL2 __AP1Rx_EL2(2)
-#define ICH_AP1R3_EL2 __AP1Rx_EL2(3)
-
#ifndef __ASSEMBLY__
#include <linux/stringify.h>
#include <asm/barrier.h>
#include <asm/cacheflush.h>
-#define read_gicreg read_sysreg_s
-#define write_gicreg write_sysreg_s
+#define read_gicreg(r) read_sysreg_s(SYS_ ## r)
+#define write_gicreg(v, r) write_sysreg_s(v, SYS_ ## r)
/*
* Low-level accessors
@@ -93,13 +38,13 @@
static inline void gic_write_eoir(u32 irq)
{
- write_sysreg_s(irq, ICC_EOIR1_EL1);
+ write_sysreg_s(irq, SYS_ICC_EOIR1_EL1);
isb();
}
static inline void gic_write_dir(u32 irq)
{
- write_sysreg_s(irq, ICC_DIR_EL1);
+ write_sysreg_s(irq, SYS_ICC_DIR_EL1);
isb();
}
@@ -107,7 +52,7 @@ static inline u64 gic_read_iar_common(void)
{
u64 irqstat;
- irqstat = read_sysreg_s(ICC_IAR1_EL1);
+ irqstat = read_sysreg_s(SYS_ICC_IAR1_EL1);
dsb(sy);
return irqstat;
}
@@ -124,7 +69,7 @@ static inline u64 gic_read_iar_cavium_thunderx(void)
u64 irqstat;
nops(8);
- irqstat = read_sysreg_s(ICC_IAR1_EL1);
+ irqstat = read_sysreg_s(SYS_ICC_IAR1_EL1);
nops(4);
mb();
@@ -133,40 +78,40 @@ static inline u64 gic_read_iar_cavium_thunderx(void)
static inline void gic_write_pmr(u32 val)
{
- write_sysreg_s(val, ICC_PMR_EL1);
+ write_sysreg_s(val, SYS_ICC_PMR_EL1);
}
static inline void gic_write_ctlr(u32 val)
{
- write_sysreg_s(val, ICC_CTLR_EL1);
+ write_sysreg_s(val, SYS_ICC_CTLR_EL1);
isb();
}
static inline void gic_write_grpen1(u32 val)
{
- write_sysreg_s(val, ICC_GRPEN1_EL1);
+ write_sysreg_s(val, SYS_ICC_GRPEN1_EL1);
isb();
}
static inline void gic_write_sgi1r(u64 val)
{
- write_sysreg_s(val, ICC_SGI1R_EL1);
+ write_sysreg_s(val, SYS_ICC_SGI1R_EL1);
}
static inline u32 gic_read_sre(void)
{
- return read_sysreg_s(ICC_SRE_EL1);
+ return read_sysreg_s(SYS_ICC_SRE_EL1);
}
static inline void gic_write_sre(u32 val)
{
- write_sysreg_s(val, ICC_SRE_EL1);
+ write_sysreg_s(val, SYS_ICC_SRE_EL1);
isb();
}
static inline void gic_write_bpr1(u32 val)
{
- asm volatile("msr_s " __stringify(ICC_BPR1_EL1) ", %0" : : "r" (val));
+ write_sysreg_s(val, SYS_ICC_BPR1_EL1);
}
#define gic_read_typer(c) readq_relaxed(c)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 3498d02..9dc30bc 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -149,9 +149,20 @@
#define SYS_ID_AA64MMFR1_EL1 sys_reg(3, 0, 0, 7, 1)
#define SYS_ID_AA64MMFR2_EL1 sys_reg(3, 0, 0, 7, 2)
+#define SYS_ICC_PMR_EL1 sys_reg(3, 0, 4, 6, 0)
+
#define SYS_PMINTENSET_EL1 sys_reg(3, 0, 9, 14, 1)
#define SYS_PMINTENCLR_EL1 sys_reg(3, 0, 9, 14, 2)
+#define SYS_ICC_DIR_EL1 sys_reg(3, 0, 12, 11, 1)
+#define SYS_ICC_SGI1R_EL1 sys_reg(3, 0, 12, 11, 5)
+#define SYS_ICC_IAR1_EL1 sys_reg(3, 0, 12, 12, 0)
+#define SYS_ICC_EOIR1_EL1 sys_reg(3, 0, 12, 12, 1)
+#define SYS_ICC_BPR1_EL1 sys_reg(3, 0, 12, 12, 3)
+#define SYS_ICC_CTLR_EL1 sys_reg(3, 0, 12, 12, 4)
+#define SYS_ICC_SRE_EL1 sys_reg(3, 0, 12, 12, 5)
+#define SYS_ICC_GRPEN1_EL1 sys_reg(3, 0, 12, 12, 7)
+
#define SYS_CTR_EL0 sys_reg(3, 3, 0, 0, 1)
#define SYS_DCZID_EL0 sys_reg(3, 3, 0, 0, 7)
@@ -179,6 +190,47 @@
#define SYS_PMCCFILTR_EL0 sys_reg (3, 3, 14, 15, 7)
+#define __SYS__AP0Rx_EL2(x) sys_reg(3, 4, 12, 8, x)
+#define SYS_ICH_AP0R0_EL2 __SYS__AP0Rx_EL2(0)
+#define SYS_ICH_AP0R1_EL2 __SYS__AP0Rx_EL2(1)
+#define SYS_ICH_AP0R2_EL2 __SYS__AP0Rx_EL2(2)
+#define SYS_ICH_AP0R3_EL2 __SYS__AP0Rx_EL2(3)
+
+#define __SYS__AP1Rx_EL2(x) sys_reg(3, 4, 12, 9, x)
+#define SYS_ICH_AP1R0_EL2 __SYS__AP1Rx_EL2(0)
+#define SYS_ICH_AP1R1_EL2 __SYS__AP1Rx_EL2(1)
+#define SYS_ICH_AP1R2_EL2 __SYS__AP1Rx_EL2(2)
+#define SYS_ICH_AP1R3_EL2 __SYS__AP1Rx_EL2(3)
+
+#define SYS_ICH_VSEIR_EL2 sys_reg(3, 4, 12, 9, 4)
+#define SYS_ICC_SRE_EL2 sys_reg(3, 4, 12, 9, 5)
+#define SYS_ICH_HCR_EL2 sys_reg(3, 4, 12, 11, 0)
+#define SYS_ICH_VTR_EL2 sys_reg(3, 4, 12, 11, 1)
+#define SYS_ICH_MISR_EL2 sys_reg(3, 4, 12, 11, 2)
+#define SYS_ICH_EISR_EL2 sys_reg(3, 4, 12, 11, 3)
+#define SYS_ICH_ELSR_EL2 sys_reg(3, 4, 12, 11, 5)
+#define SYS_ICH_VMCR_EL2 sys_reg(3, 4, 12, 11, 7)
+
+#define __SYS__LR0_EL2(x) sys_reg(3, 4, 12, 12, x)
+#define SYS_ICH_LR0_EL2 __SYS__LR0_EL2(0)
+#define SYS_ICH_LR1_EL2 __SYS__LR0_EL2(1)
+#define SYS_ICH_LR2_EL2 __SYS__LR0_EL2(2)
+#define SYS_ICH_LR3_EL2 __SYS__LR0_EL2(3)
+#define SYS_ICH_LR4_EL2 __SYS__LR0_EL2(4)
+#define SYS_ICH_LR5_EL2 __SYS__LR0_EL2(5)
+#define SYS_ICH_LR6_EL2 __SYS__LR0_EL2(6)
+#define SYS_ICH_LR7_EL2 __SYS__LR0_EL2(7)
+
+#define __SYS__LR8_EL2(x) sys_reg(3, 4, 12, 13, x)
+#define SYS_ICH_LR8_EL2 __SYS__LR8_EL2(0)
+#define SYS_ICH_LR9_EL2 __SYS__LR8_EL2(1)
+#define SYS_ICH_LR10_EL2 __SYS__LR8_EL2(2)
+#define SYS_ICH_LR11_EL2 __SYS__LR8_EL2(3)
+#define SYS_ICH_LR12_EL2 __SYS__LR8_EL2(4)
+#define SYS_ICH_LR13_EL2 __SYS__LR8_EL2(5)
+#define SYS_ICH_LR14_EL2 __SYS__LR8_EL2(6)
+#define SYS_ICH_LR15_EL2 __SYS__LR8_EL2(7)
+
/* Common SCTLR_ELx flags. */
#define SCTLR_ELx_EE (1 << 25)
#define SCTLR_ELx_I (1 << 12)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 4fb6ccd..95ae40ac 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -594,14 +594,14 @@ set_hcr:
cmp x0, #1
b.ne 3f
- mrs_s x0, ICC_SRE_EL2
+ mrs_s x0, SYS_ICC_SRE_EL2
orr x0, x0, #ICC_SRE_EL2_SRE // Set ICC_SRE_EL2.SRE==1
orr x0, x0, #ICC_SRE_EL2_ENABLE // Set ICC_SRE_EL2.Enable==1
- msr_s ICC_SRE_EL2, x0
+ msr_s SYS_ICC_SRE_EL2, x0
isb // Make sure SRE is now set
- mrs_s x0, ICC_SRE_EL2 // Read SRE back,
+ mrs_s x0, SYS_ICC_SRE_EL2 // Read SRE back,
tbz x0, #0, 3f // and check that it sticks
- msr_s ICH_HCR_EL2, xzr // Reset ICC_HCR_EL2 to defaults
+ msr_s SYS_ICH_HCR_EL2, xzr // Reset ICC_HCR_EL2 to defaults
3:
#endif
--
2.9.0
* [PULL 05/79] arm64: sysreg: add physical timer registers
From: Christoffer Dall @ 2017-04-23 17:08 UTC
To: linux-arm-kernel
From: Mark Rutland <mark.rutland@arm.com>
This patch adds sysreg definitions for system registers used to control
the architected physical timer. Subsequent patches will make use of
these definitions.
The encodings were taken from ARM DDI 0487A.k_iss10775, Table C5-6.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
arch/arm64/include/asm/sysreg.h | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 9dc30bc..3e281b1 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -182,6 +182,10 @@
#define SYS_CNTFRQ_EL0 sys_reg(3, 3, 14, 0, 0)
+#define SYS_CNTP_TVAL_EL0 sys_reg(3, 3, 14, 2, 0)
+#define SYS_CNTP_CTL_EL0 sys_reg(3, 3, 14, 2, 1)
+#define SYS_CNTP_CVAL_EL0 sys_reg(3, 3, 14, 2, 2)
+
#define __PMEV_op2(n) ((n) & 0x7)
#define __CNTR_CRm(n) (0x8 | (((n) >> 3) & 0x3))
#define SYS_PMEVCNTRn_EL0(n) sys_reg(3, 3, 14, __CNTR_CRm(n), __PMEV_op2(n))
--
2.9.0
* [PULL 06/79] arm64: sysreg: add register encodings used by KVM
From: Christoffer Dall @ 2017-04-23 17:08 UTC
To: linux-arm-kernel
From: Mark Rutland <mark.rutland@arm.com>
This patch adds sysreg definitions for registers which KVM needs the
encodings for, which are not currently described in <asm/sysreg.h>.
Subsequent patches will make use of these definitions.
The encodings were taken from ARM DDI 0487A.k_iss10775, Table C5-6, but
this is not an exhaustive addition. Additions are only made for
registers used today by KVM.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
arch/arm64/include/asm/sysreg.h | 37 +++++++++++++++++++++++++++++++++++++
1 file changed, 37 insertions(+)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 3e281b1..f623320 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -119,6 +119,7 @@
#define SYS_ID_PFR0_EL1 sys_reg(3, 0, 0, 1, 0)
#define SYS_ID_PFR1_EL1 sys_reg(3, 0, 0, 1, 1)
#define SYS_ID_DFR0_EL1 sys_reg(3, 0, 0, 1, 2)
+#define SYS_ID_AFR0_EL1 sys_reg(3, 0, 0, 1, 3)
#define SYS_ID_MMFR0_EL1 sys_reg(3, 0, 0, 1, 4)
#define SYS_ID_MMFR1_EL1 sys_reg(3, 0, 0, 1, 5)
#define SYS_ID_MMFR2_EL1 sys_reg(3, 0, 0, 1, 6)
@@ -149,11 +150,30 @@
#define SYS_ID_AA64MMFR1_EL1 sys_reg(3, 0, 0, 7, 1)
#define SYS_ID_AA64MMFR2_EL1 sys_reg(3, 0, 0, 7, 2)
+#define SYS_SCTLR_EL1 sys_reg(3, 0, 1, 0, 0)
+#define SYS_ACTLR_EL1 sys_reg(3, 0, 1, 0, 1)
+#define SYS_CPACR_EL1 sys_reg(3, 0, 1, 0, 2)
+
+#define SYS_TTBR0_EL1 sys_reg(3, 0, 2, 0, 0)
+#define SYS_TTBR1_EL1 sys_reg(3, 0, 2, 0, 1)
+#define SYS_TCR_EL1 sys_reg(3, 0, 2, 0, 2)
+
#define SYS_ICC_PMR_EL1 sys_reg(3, 0, 4, 6, 0)
+#define SYS_AFSR0_EL1 sys_reg(3, 0, 5, 1, 0)
+#define SYS_AFSR1_EL1 sys_reg(3, 0, 5, 1, 1)
+#define SYS_ESR_EL1 sys_reg(3, 0, 5, 2, 0)
+#define SYS_FAR_EL1 sys_reg(3, 0, 6, 0, 0)
+#define SYS_PAR_EL1 sys_reg(3, 0, 7, 4, 0)
+
#define SYS_PMINTENSET_EL1 sys_reg(3, 0, 9, 14, 1)
#define SYS_PMINTENCLR_EL1 sys_reg(3, 0, 9, 14, 2)
+#define SYS_MAIR_EL1 sys_reg(3, 0, 10, 2, 0)
+#define SYS_AMAIR_EL1 sys_reg(3, 0, 10, 3, 0)
+
+#define SYS_VBAR_EL1 sys_reg(3, 0, 12, 0, 0)
+
#define SYS_ICC_DIR_EL1 sys_reg(3, 0, 12, 11, 1)
#define SYS_ICC_SGI1R_EL1 sys_reg(3, 0, 12, 11, 5)
#define SYS_ICC_IAR1_EL1 sys_reg(3, 0, 12, 12, 0)
@@ -163,6 +183,16 @@
#define SYS_ICC_SRE_EL1 sys_reg(3, 0, 12, 12, 5)
#define SYS_ICC_GRPEN1_EL1 sys_reg(3, 0, 12, 12, 7)
+#define SYS_CONTEXTIDR_EL1 sys_reg(3, 0, 13, 0, 1)
+#define SYS_TPIDR_EL1 sys_reg(3, 0, 13, 0, 4)
+
+#define SYS_CNTKCTL_EL1 sys_reg(3, 0, 14, 1, 0)
+
+#define SYS_CLIDR_EL1 sys_reg(3, 1, 0, 0, 1)
+#define SYS_AIDR_EL1 sys_reg(3, 1, 0, 0, 7)
+
+#define SYS_CSSELR_EL1 sys_reg(3, 2, 0, 0, 0)
+
#define SYS_CTR_EL0 sys_reg(3, 3, 0, 0, 1)
#define SYS_DCZID_EL0 sys_reg(3, 3, 0, 0, 7)
@@ -180,6 +210,9 @@
#define SYS_PMUSERENR_EL0 sys_reg(3, 3, 9, 14, 0)
#define SYS_PMOVSSET_EL0 sys_reg(3, 3, 9, 14, 3)
+#define SYS_TPIDR_EL0 sys_reg(3, 3, 13, 0, 2)
+#define SYS_TPIDRRO_EL0 sys_reg(3, 3, 13, 0, 3)
+
#define SYS_CNTFRQ_EL0 sys_reg(3, 3, 14, 0, 0)
#define SYS_CNTP_TVAL_EL0 sys_reg(3, 3, 14, 2, 0)
@@ -194,6 +227,10 @@
#define SYS_PMCCFILTR_EL0 sys_reg (3, 3, 14, 15, 7)
+#define SYS_DACR32_EL2 sys_reg(3, 4, 3, 0, 0)
+#define SYS_IFSR32_EL2 sys_reg(3, 4, 5, 0, 1)
+#define SYS_FPEXC32_EL2 sys_reg(3, 4, 5, 3, 0)
+
#define __SYS__AP0Rx_EL2(x) sys_reg(3, 4, 12, 8, x)
#define SYS_ICH_AP0R0_EL2 __SYS__AP0Rx_EL2(0)
#define SYS_ICH_AP0R1_EL2 __SYS__AP0Rx_EL2(1)
--
2.9.0
* [PULL 07/79] arm64: sysreg: add Set/Way sys encodings
From: Christoffer Dall @ 2017-04-23 17:08 UTC
To: linux-arm-kernel
From: Mark Rutland <mark.rutland@arm.com>
Cache maintenance ops fall in the SYS instruction class, and KVM needs
to handle them. So as to keep all SYS encodings in one place, this
patch adds them to sysreg.h.
The encodings were taken from ARM DDI 0487A.k_iss10775, Table C5-2.
To make it clear that these are instructions rather than registers, and
to allow us to change the way these are handled in future, a new
sys_insn() alias for sys_reg() is added and used for these new
definitions.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
arch/arm64/include/asm/sysreg.h | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index f623320..128eae8 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -48,6 +48,8 @@
((crn) << CRn_shift) | ((crm) << CRm_shift) | \
((op2) << Op2_shift))
+#define sys_insn sys_reg
+
#define sys_reg_Op0(id) (((id) >> Op0_shift) & Op0_mask)
#define sys_reg_Op1(id) (((id) >> Op1_shift) & Op1_mask)
#define sys_reg_CRn(id) (((id) >> CRn_shift) & CRn_mask)
@@ -89,6 +91,10 @@
#define SET_PSTATE_UAO(x) __emit_inst(0xd5000000 | REG_PSTATE_UAO_IMM | \
(!!x)<<8 | 0x1f)
+#define SYS_DC_ISW sys_insn(1, 0, 7, 6, 2)
+#define SYS_DC_CSW sys_insn(1, 0, 7, 10, 2)
+#define SYS_DC_CISW sys_insn(1, 0, 7, 14, 2)
+
#define SYS_OSDTRRX_EL1 sys_reg(2, 0, 0, 0, 2)
#define SYS_MDCCINT_EL1 sys_reg(2, 0, 0, 2, 0)
#define SYS_MDSCR_EL1 sys_reg(2, 0, 0, 2, 2)
--
2.9.0
* [PULL 08/79] KVM: arm64: add SYS_DESC()
From: Christoffer Dall @ 2017-04-23 17:08 UTC
To: linux-arm-kernel
From: Mark Rutland <mark.rutland@arm.com>
This patch adds a macro enabling us to initialise sys_reg_desc
structures based on common sysreg encoding definitions in
<asm/sysreg.h>. Subsequent patches will use this to simplify the KVM
code.
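A rough userspace mock of the effect (the struct layout and field
shifts below are assumptions for illustration, not the kernel
definitions): a single encoding constant expands into the five field
initialisers that sys_reg_descs entries previously spelled out by hand.

#include <stdio.h>

struct sys_reg_desc { unsigned int Op0, Op1, CRn, CRm, Op2; };

#define SYS_REG(op0, op1, crn, crm, op2) \
	(((op0) << 19) | ((op1) << 16) | ((crn) << 12) | ((crm) << 8) | ((op2) << 5))
#define SYS_DESC(reg)						\
	.Op0 = ((reg) >> 19) & 0x3, .Op1 = ((reg) >> 16) & 0x7,	\
	.CRn = ((reg) >> 12) & 0xf, .CRm = ((reg) >> 8) & 0xf,	\
	.Op2 = ((reg) >> 5) & 0x7

#define SYS_MDSCR_EL1	SYS_REG(2, 0, 0, 2, 2)

static const struct sys_reg_desc descs[] = {
	{ SYS_DESC(SYS_MDSCR_EL1) },	/* was: Op0(0b10), Op1(0b000), ..., Op2(0b010) */
};

int main(void)
{
	printf("MDSCR_EL1: Op0=%u Op1=%u CRn=%u CRm=%u Op2=%u\n",
	       descs[0].Op0, descs[0].Op1, descs[0].CRn, descs[0].CRm, descs[0].Op2);
	return 0;
}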
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
arch/arm64/kvm/sys_regs.h | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index 9c6ffd0..66859a5 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -147,4 +147,9 @@ const struct sys_reg_desc *find_reg_by_id(u64 id,
#define CRm(_x) .CRm = _x
#define Op2(_x) .Op2 = _x
+#define SYS_DESC(reg) \
+ Op0(sys_reg_Op0(reg)), Op1(sys_reg_Op1(reg)), \
+ CRn(sys_reg_CRn(reg)), CRm(sys_reg_CRm(reg)), \
+ Op2(sys_reg_Op2(reg))
+
#endif /* __ARM64_KVM_SYS_REGS_LOCAL_H__ */
--
2.9.0
* [PULL 09/79] KVM: arm64: Use common debug sysreg definitions
From: Christoffer Dall @ 2017-04-23 17:08 UTC
To: linux-arm-kernel
From: Mark Rutland <mark.rutland@arm.com>
Now that we have common definitions for the debug register encodings,
make the KVM code use these, simplifying the sys_reg_descs table.
The table previously erroneously referred to MDCCSR_EL0 as MDCCSR_EL1.
This is corrected (as is necessary in order to use the common sysreg
definition).
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
arch/arm64/kvm/sys_regs.c | 73 ++++++++++++++---------------------------------
1 file changed, 21 insertions(+), 52 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 0e26f8c..5fa23fd 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -793,17 +793,13 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
/* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
#define DBG_BCR_BVR_WCR_WVR_EL1(n) \
- /* DBGBVRn_EL1 */ \
- { Op0(0b10), Op1(0b000), CRn(0b0000), CRm((n)), Op2(0b100), \
+ { SYS_DESC(SYS_DBGBVRn_EL1(n)), \
trap_bvr, reset_bvr, n, 0, get_bvr, set_bvr }, \
- /* DBGBCRn_EL1 */ \
- { Op0(0b10), Op1(0b000), CRn(0b0000), CRm((n)), Op2(0b101), \
+ { SYS_DESC(SYS_DBGBCRn_EL1(n)), \
trap_bcr, reset_bcr, n, 0, get_bcr, set_bcr }, \
- /* DBGWVRn_EL1 */ \
- { Op0(0b10), Op1(0b000), CRn(0b0000), CRm((n)), Op2(0b110), \
+ { SYS_DESC(SYS_DBGWVRn_EL1(n)), \
trap_wvr, reset_wvr, n, 0, get_wvr, set_wvr }, \
- /* DBGWCRn_EL1 */ \
- { Op0(0b10), Op1(0b000), CRn(0b0000), CRm((n)), Op2(0b111), \
+ { SYS_DESC(SYS_DBGWCRn_EL1(n)), \
trap_wcr, reset_wcr, n, 0, get_wcr, set_wcr }
/* Macro to expand the PMEVCNTRn_EL0 register */
@@ -899,12 +895,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
DBG_BCR_BVR_WCR_WVR_EL1(0),
DBG_BCR_BVR_WCR_WVR_EL1(1),
- /* MDCCINT_EL1 */
- { Op0(0b10), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000),
- trap_debug_regs, reset_val, MDCCINT_EL1, 0 },
- /* MDSCR_EL1 */
- { Op0(0b10), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010),
- trap_debug_regs, reset_val, MDSCR_EL1, 0 },
+ { SYS_DESC(SYS_MDCCINT_EL1), trap_debug_regs, reset_val, MDCCINT_EL1, 0 },
+ { SYS_DESC(SYS_MDSCR_EL1), trap_debug_regs, reset_val, MDSCR_EL1, 0 },
DBG_BCR_BVR_WCR_WVR_EL1(2),
DBG_BCR_BVR_WCR_WVR_EL1(3),
DBG_BCR_BVR_WCR_WVR_EL1(4),
@@ -920,44 +912,21 @@ static const struct sys_reg_desc sys_reg_descs[] = {
DBG_BCR_BVR_WCR_WVR_EL1(14),
DBG_BCR_BVR_WCR_WVR_EL1(15),
- /* MDRAR_EL1 */
- { Op0(0b10), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b000),
- trap_raz_wi },
- /* OSLAR_EL1 */
- { Op0(0b10), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b100),
- trap_raz_wi },
- /* OSLSR_EL1 */
- { Op0(0b10), Op1(0b000), CRn(0b0001), CRm(0b0001), Op2(0b100),
- trap_oslsr_el1 },
- /* OSDLR_EL1 */
- { Op0(0b10), Op1(0b000), CRn(0b0001), CRm(0b0011), Op2(0b100),
- trap_raz_wi },
- /* DBGPRCR_EL1 */
- { Op0(0b10), Op1(0b000), CRn(0b0001), CRm(0b0100), Op2(0b100),
- trap_raz_wi },
- /* DBGCLAIMSET_EL1 */
- { Op0(0b10), Op1(0b000), CRn(0b0111), CRm(0b1000), Op2(0b110),
- trap_raz_wi },
- /* DBGCLAIMCLR_EL1 */
- { Op0(0b10), Op1(0b000), CRn(0b0111), CRm(0b1001), Op2(0b110),
- trap_raz_wi },
- /* DBGAUTHSTATUS_EL1 */
- { Op0(0b10), Op1(0b000), CRn(0b0111), CRm(0b1110), Op2(0b110),
- trap_dbgauthstatus_el1 },
-
- /* MDCCSR_EL1 */
- { Op0(0b10), Op1(0b011), CRn(0b0000), CRm(0b0001), Op2(0b000),
- trap_raz_wi },
- /* DBGDTR_EL0 */
- { Op0(0b10), Op1(0b011), CRn(0b0000), CRm(0b0100), Op2(0b000),
- trap_raz_wi },
- /* DBGDTR[TR]X_EL0 */
- { Op0(0b10), Op1(0b011), CRn(0b0000), CRm(0b0101), Op2(0b000),
- trap_raz_wi },
-
- /* DBGVCR32_EL2 */
- { Op0(0b10), Op1(0b100), CRn(0b0000), CRm(0b0111), Op2(0b000),
- NULL, reset_val, DBGVCR32_EL2, 0 },
+ { SYS_DESC(SYS_MDRAR_EL1), trap_raz_wi },
+ { SYS_DESC(SYS_OSLAR_EL1), trap_raz_wi },
+ { SYS_DESC(SYS_OSLSR_EL1), trap_oslsr_el1 },
+ { SYS_DESC(SYS_OSDLR_EL1), trap_raz_wi },
+ { SYS_DESC(SYS_DBGPRCR_EL1), trap_raz_wi },
+ { SYS_DESC(SYS_DBGCLAIMSET_EL1), trap_raz_wi },
+ { SYS_DESC(SYS_DBGCLAIMCLR_EL1), trap_raz_wi },
+ { SYS_DESC(SYS_DBGAUTHSTATUS_EL1), trap_dbgauthstatus_el1 },
+
+ { SYS_DESC(SYS_MDCCSR_EL0), trap_raz_wi },
+ { SYS_DESC(SYS_DBGDTR_EL0), trap_raz_wi },
+ // DBGDTR[TR]X_EL0 share the same encoding
+ { SYS_DESC(SYS_DBGDTRTX_EL0), trap_raz_wi },
+
+ { SYS_DESC(SYS_DBGVCR32_EL2), NULL, reset_val, DBGVCR32_EL2, 0 },
/* MPIDR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b101),
--
2.9.0
* [PULL 10/79] KVM: arm64: Use common performance monitor sysreg definitions
From: Christoffer Dall @ 2017-04-23 17:08 UTC
To: linux-arm-kernel
From: Mark Rutland <mark.rutland@arm.com>
Now that we have common definitions for the performance monitor register
encodings, make the KVM code use these, simplifying the sys_reg_descs
table.
The comments for PMUSERENR_EL0 and PMCCFILTR_EL0 are kept, as these
describe non-obvious details regarding the registers. However, a slight
fixup is applied to bring these into line with the usual comment style.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
arch/arm64/kvm/sys_regs.c | 78 +++++++++++++----------------------------------
1 file changed, 22 insertions(+), 56 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 5fa23fd..63b0785 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -804,16 +804,12 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
/* Macro to expand the PMEVCNTRn_EL0 register */
#define PMU_PMEVCNTR_EL0(n) \
- /* PMEVCNTRn_EL0 */ \
- { Op0(0b11), Op1(0b011), CRn(0b1110), \
- CRm((0b1000 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)), \
+ { SYS_DESC(SYS_PMEVCNTRn_EL0(n)), \
access_pmu_evcntr, reset_unknown, (PMEVCNTR0_EL0 + n), }
/* Macro to expand the PMEVTYPERn_EL0 register */
#define PMU_PMEVTYPER_EL0(n) \
- /* PMEVTYPERn_EL0 */ \
- { Op0(0b11), Op1(0b011), CRn(0b1110), \
- CRm((0b1100 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)), \
+ { SYS_DESC(SYS_PMEVTYPERn_EL0(n)), \
access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
static bool access_cntp_tval(struct kvm_vcpu *vcpu,
@@ -963,12 +959,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ Op0(0b11), Op1(0b000), CRn(0b0111), CRm(0b0100), Op2(0b000),
NULL, reset_unknown, PAR_EL1 },
- /* PMINTENSET_EL1 */
- { Op0(0b11), Op1(0b000), CRn(0b1001), CRm(0b1110), Op2(0b001),
- access_pminten, reset_unknown, PMINTENSET_EL1 },
- /* PMINTENCLR_EL1 */
- { Op0(0b11), Op1(0b000), CRn(0b1001), CRm(0b1110), Op2(0b010),
- access_pminten, NULL, PMINTENSET_EL1 },
+ { SYS_DESC(SYS_PMINTENSET_EL1), access_pminten, reset_unknown, PMINTENSET_EL1 },
+ { SYS_DESC(SYS_PMINTENCLR_EL1), access_pminten, NULL, PMINTENSET_EL1 },
/* MAIR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b1010), CRm(0b0010), Op2(0b000),
@@ -1003,48 +995,23 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ Op0(0b11), Op1(0b010), CRn(0b0000), CRm(0b0000), Op2(0b000),
NULL, reset_unknown, CSSELR_EL1 },
- /* PMCR_EL0 */
- { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b000),
- access_pmcr, reset_pmcr, },
- /* PMCNTENSET_EL0 */
- { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b001),
- access_pmcnten, reset_unknown, PMCNTENSET_EL0 },
- /* PMCNTENCLR_EL0 */
- { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b010),
- access_pmcnten, NULL, PMCNTENSET_EL0 },
- /* PMOVSCLR_EL0 */
- { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b011),
- access_pmovs, NULL, PMOVSSET_EL0 },
- /* PMSWINC_EL0 */
- { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b100),
- access_pmswinc, reset_unknown, PMSWINC_EL0 },
- /* PMSELR_EL0 */
- { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b101),
- access_pmselr, reset_unknown, PMSELR_EL0 },
- /* PMCEID0_EL0 */
- { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b110),
- access_pmceid },
- /* PMCEID1_EL0 */
- { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b111),
- access_pmceid },
- /* PMCCNTR_EL0 */
- { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b000),
- access_pmu_evcntr, reset_unknown, PMCCNTR_EL0 },
- /* PMXEVTYPER_EL0 */
- { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b001),
- access_pmu_evtyper },
- /* PMXEVCNTR_EL0 */
- { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b010),
- access_pmu_evcntr },
- /* PMUSERENR_EL0
- * This register resets as unknown in 64bit mode while it resets as zero
+ { SYS_DESC(SYS_PMCR_EL0), access_pmcr, reset_pmcr, },
+ { SYS_DESC(SYS_PMCNTENSET_EL0), access_pmcnten, reset_unknown, PMCNTENSET_EL0 },
+ { SYS_DESC(SYS_PMCNTENCLR_EL0), access_pmcnten, NULL, PMCNTENSET_EL0 },
+ { SYS_DESC(SYS_PMOVSCLR_EL0), access_pmovs, NULL, PMOVSSET_EL0 },
+ { SYS_DESC(SYS_PMSWINC_EL0), access_pmswinc, reset_unknown, PMSWINC_EL0 },
+ { SYS_DESC(SYS_PMSELR_EL0), access_pmselr, reset_unknown, PMSELR_EL0 },
+ { SYS_DESC(SYS_PMCEID0_EL0), access_pmceid },
+ { SYS_DESC(SYS_PMCEID1_EL0), access_pmceid },
+ { SYS_DESC(SYS_PMCCNTR_EL0), access_pmu_evcntr, reset_unknown, PMCCNTR_EL0 },
+ { SYS_DESC(SYS_PMXEVTYPER_EL0), access_pmu_evtyper },
+ { SYS_DESC(SYS_PMXEVCNTR_EL0), access_pmu_evcntr },
+ /*
+ * PMUSERENR_EL0 resets as unknown in 64bit mode while it resets as zero
* in 32bit mode. Here we choose to reset it as zero for consistency.
*/
- { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b000),
- access_pmuserenr, reset_val, PMUSERENR_EL0, 0 },
- /* PMOVSSET_EL0 */
- { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b011),
- access_pmovs, reset_unknown, PMOVSSET_EL0 },
+ { SYS_DESC(SYS_PMUSERENR_EL0), access_pmuserenr, reset_val, PMUSERENR_EL0, 0 },
+ { SYS_DESC(SYS_PMOVSSET_EL0), access_pmovs, reset_unknown, PMOVSSET_EL0 },
/* TPIDR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b010),
@@ -1127,12 +1094,11 @@ static const struct sys_reg_desc sys_reg_descs[] = {
PMU_PMEVTYPER_EL0(28),
PMU_PMEVTYPER_EL0(29),
PMU_PMEVTYPER_EL0(30),
- /* PMCCFILTR_EL0
- * This register resets as unknown in 64bit mode while it resets as zero
+ /*
+ * PMCCFILTR_EL0 resets as unknown in 64bit mode while it resets as zero
* in 32bit mode. Here we choose to reset it as zero for consistency.
*/
- { Op0(0b11), Op1(0b011), CRn(0b1110), CRm(0b1111), Op2(0b111),
- access_pmu_evtyper, reset_val, PMCCFILTR_EL0, 0 },
+ { SYS_DESC(SYS_PMCCFILTR_EL0), access_pmu_evtyper, reset_val, PMCCFILTR_EL0, 0 },
/* DACR32_EL2 */
{ Op0(0b11), Op1(0b100), CRn(0b0011), CRm(0b0000), Op2(0b000),
--
2.9.0
* [PULL 11/79] KVM: arm64: Use common GICv3 sysreg definitions
From: Christoffer Dall @ 2017-04-23 17:08 UTC
To: linux-arm-kernel
From: Mark Rutland <mark.rutland@arm.com>
Now that we have common definitions for the GICv3 register encodings,
make the KVM code use these, simplifying the sys_reg_descs table.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
arch/arm64/kvm/sys_regs.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 63b0785..1f3062b 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -973,12 +973,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ Op0(0b11), Op1(0b000), CRn(0b1100), CRm(0b0000), Op2(0b000),
NULL, reset_val, VBAR_EL1, 0 },
- /* ICC_SGI1R_EL1 */
- { Op0(0b11), Op1(0b000), CRn(0b1100), CRm(0b1011), Op2(0b101),
- access_gic_sgi },
- /* ICC_SRE_EL1 */
- { Op0(0b11), Op1(0b000), CRn(0b1100), CRm(0b1100), Op2(0b101),
- access_gic_sre },
+ { SYS_DESC(SYS_ICC_SGI1R_EL1), access_gic_sgi },
+ { SYS_DESC(SYS_ICC_SRE_EL1), access_gic_sre },
/* CONTEXTIDR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b1101), CRm(0b0000), Op2(0b001),
--
2.9.0
* [PULL 12/79] KVM: arm64: Use common physical timer sysreg definitions
From: Christoffer Dall @ 2017-04-23 17:08 UTC
To: linux-arm-kernel
From: Mark Rutland <mark.rutland@arm.com>
Now that we have common definitions for the physical timer control
registers, make the KVM code use these, simplifying the sys_reg_descs
table.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
arch/arm64/kvm/sys_regs.c | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 1f3062b..860707f 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1016,15 +1016,9 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b011),
NULL, reset_unknown, TPIDRRO_EL0 },
- /* CNTP_TVAL_EL0 */
- { Op0(0b11), Op1(0b011), CRn(0b1110), CRm(0b0010), Op2(0b000),
- access_cntp_tval },
- /* CNTP_CTL_EL0 */
- { Op0(0b11), Op1(0b011), CRn(0b1110), CRm(0b0010), Op2(0b001),
- access_cntp_ctl },
- /* CNTP_CVAL_EL0 */
- { Op0(0b11), Op1(0b011), CRn(0b1110), CRm(0b0010), Op2(0b010),
- access_cntp_cval },
+ { SYS_DESC(SYS_CNTP_TVAL_EL0), access_cntp_tval },
+ { SYS_DESC(SYS_CNTP_CTL_EL0), access_cntp_ctl },
+ { SYS_DESC(SYS_CNTP_CVAL_EL0), access_cntp_cval },
/* PMEVCNTRn_EL0 */
PMU_PMEVCNTR_EL0(0),
--
2.9.0
* [PULL 13/79] KVM: arm64: use common invariant sysreg definitions
From: Christoffer Dall @ 2017-04-23 17:08 UTC
To: linux-arm-kernel
From: Mark Rutland <mark.rutland@arm.com>
Now that we have common definitions for the register encodings used by
KVM, make the KVM code use these for invariant sysreg definitions. This
makes said definitions a reasonable amount shorter, especially as many
comments are rendered redundant and can be removed.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
arch/arm64/kvm/sys_regs.c | 57 ++++++++++++++++-------------------------------
1 file changed, 19 insertions(+), 38 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 860707f..e637e1d 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1857,44 +1857,25 @@ FUNCTION_INVARIANT(aidr_el1)
/* ->val is filled in by kvm_sys_reg_table_init() */
static struct sys_reg_desc invariant_sys_regs[] = {
- { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b000),
- NULL, get_midr_el1 },
- { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b110),
- NULL, get_revidr_el1 },
- { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b000),
- NULL, get_id_pfr0_el1 },
- { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b001),
- NULL, get_id_pfr1_el1 },
- { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b010),
- NULL, get_id_dfr0_el1 },
- { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b011),
- NULL, get_id_afr0_el1 },
- { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b100),
- NULL, get_id_mmfr0_el1 },
- { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b101),
- NULL, get_id_mmfr1_el1 },
- { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b110),
- NULL, get_id_mmfr2_el1 },
- { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b111),
- NULL, get_id_mmfr3_el1 },
- { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000),
- NULL, get_id_isar0_el1 },
- { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b001),
- NULL, get_id_isar1_el1 },
- { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010),
- NULL, get_id_isar2_el1 },
- { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b011),
- NULL, get_id_isar3_el1 },
- { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b100),
- NULL, get_id_isar4_el1 },
- { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b101),
- NULL, get_id_isar5_el1 },
- { Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b001),
- NULL, get_clidr_el1 },
- { Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b111),
- NULL, get_aidr_el1 },
- { Op0(0b11), Op1(0b011), CRn(0b0000), CRm(0b0000), Op2(0b001),
- NULL, get_ctr_el0 },
+ { SYS_DESC(SYS_MIDR_EL1), NULL, get_midr_el1 },
+ { SYS_DESC(SYS_REVIDR_EL1), NULL, get_revidr_el1 },
+ { SYS_DESC(SYS_ID_PFR0_EL1), NULL, get_id_pfr0_el1 },
+ { SYS_DESC(SYS_ID_PFR1_EL1), NULL, get_id_pfr1_el1 },
+ { SYS_DESC(SYS_ID_DFR0_EL1), NULL, get_id_dfr0_el1 },
+ { SYS_DESC(SYS_ID_AFR0_EL1), NULL, get_id_afr0_el1 },
+ { SYS_DESC(SYS_ID_MMFR0_EL1), NULL, get_id_mmfr0_el1 },
+ { SYS_DESC(SYS_ID_MMFR1_EL1), NULL, get_id_mmfr1_el1 },
+ { SYS_DESC(SYS_ID_MMFR2_EL1), NULL, get_id_mmfr2_el1 },
+ { SYS_DESC(SYS_ID_MMFR3_EL1), NULL, get_id_mmfr3_el1 },
+ { SYS_DESC(SYS_ID_ISAR0_EL1), NULL, get_id_isar0_el1 },
+ { SYS_DESC(SYS_ID_ISAR1_EL1), NULL, get_id_isar1_el1 },
+ { SYS_DESC(SYS_ID_ISAR2_EL1), NULL, get_id_isar2_el1 },
+ { SYS_DESC(SYS_ID_ISAR3_EL1), NULL, get_id_isar3_el1 },
+ { SYS_DESC(SYS_ID_ISAR4_EL1), NULL, get_id_isar4_el1 },
+ { SYS_DESC(SYS_ID_ISAR5_EL1), NULL, get_id_isar5_el1 },
+ { SYS_DESC(SYS_CLIDR_EL1), NULL, get_clidr_el1 },
+ { SYS_DESC(SYS_AIDR_EL1), NULL, get_aidr_el1 },
+ { SYS_DESC(SYS_CTR_EL0), NULL, get_ctr_el0 },
};
static int reg_from_user(u64 *val, const void __user *uaddr, u64 id)
--
2.9.0
* [PULL 14/79] KVM: arm64: Use common sysreg definitions
From: Christoffer Dall @ 2017-04-23 17:08 UTC
To: linux-arm-kernel
From: Mark Rutland <mark.rutland@arm.com>
Now that we have common definitions for the remaining register encodings
required by KVM, make the KVM code use these, simplifying the
sys_reg_descs table and the genericv8_sys_regs table.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
arch/arm64/kvm/sys_regs.c | 94 +++++++++---------------------------
arch/arm64/kvm/sys_regs_generic_v8.c | 4 +-
2 files changed, 25 insertions(+), 73 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index e637e1d..effa5ce 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -924,72 +924,36 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_DBGVCR32_EL2), NULL, reset_val, DBGVCR32_EL2, 0 },
- /* MPIDR_EL1 */
- { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b101),
- NULL, reset_mpidr, MPIDR_EL1 },
- /* SCTLR_EL1 */
- { Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b000),
- access_vm_reg, reset_val, SCTLR_EL1, 0x00C50078 },
- /* CPACR_EL1 */
- { Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b010),
- NULL, reset_val, CPACR_EL1, 0 },
- /* TTBR0_EL1 */
- { Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b000),
- access_vm_reg, reset_unknown, TTBR0_EL1 },
- /* TTBR1_EL1 */
- { Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b001),
- access_vm_reg, reset_unknown, TTBR1_EL1 },
- /* TCR_EL1 */
- { Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b010),
- access_vm_reg, reset_val, TCR_EL1, 0 },
-
- /* AFSR0_EL1 */
- { Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0001), Op2(0b000),
- access_vm_reg, reset_unknown, AFSR0_EL1 },
- /* AFSR1_EL1 */
- { Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0001), Op2(0b001),
- access_vm_reg, reset_unknown, AFSR1_EL1 },
- /* ESR_EL1 */
- { Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0010), Op2(0b000),
- access_vm_reg, reset_unknown, ESR_EL1 },
- /* FAR_EL1 */
- { Op0(0b11), Op1(0b000), CRn(0b0110), CRm(0b0000), Op2(0b000),
- access_vm_reg, reset_unknown, FAR_EL1 },
- /* PAR_EL1 */
- { Op0(0b11), Op1(0b000), CRn(0b0111), CRm(0b0100), Op2(0b000),
- NULL, reset_unknown, PAR_EL1 },
+ { SYS_DESC(SYS_MPIDR_EL1), NULL, reset_mpidr, MPIDR_EL1 },
+ { SYS_DESC(SYS_SCTLR_EL1), access_vm_reg, reset_val, SCTLR_EL1, 0x00C50078 },
+ { SYS_DESC(SYS_CPACR_EL1), NULL, reset_val, CPACR_EL1, 0 },
+ { SYS_DESC(SYS_TTBR0_EL1), access_vm_reg, reset_unknown, TTBR0_EL1 },
+ { SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
+ { SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
+
+ { SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
+ { SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
+ { SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
+ { SYS_DESC(SYS_FAR_EL1), access_vm_reg, reset_unknown, FAR_EL1 },
+ { SYS_DESC(SYS_PAR_EL1), NULL, reset_unknown, PAR_EL1 },
{ SYS_DESC(SYS_PMINTENSET_EL1), access_pminten, reset_unknown, PMINTENSET_EL1 },
{ SYS_DESC(SYS_PMINTENCLR_EL1), access_pminten, NULL, PMINTENSET_EL1 },
- /* MAIR_EL1 */
- { Op0(0b11), Op1(0b000), CRn(0b1010), CRm(0b0010), Op2(0b000),
- access_vm_reg, reset_unknown, MAIR_EL1 },
- /* AMAIR_EL1 */
- { Op0(0b11), Op1(0b000), CRn(0b1010), CRm(0b0011), Op2(0b000),
- access_vm_reg, reset_amair_el1, AMAIR_EL1 },
+ { SYS_DESC(SYS_MAIR_EL1), access_vm_reg, reset_unknown, MAIR_EL1 },
+ { SYS_DESC(SYS_AMAIR_EL1), access_vm_reg, reset_amair_el1, AMAIR_EL1 },
- /* VBAR_EL1 */
- { Op0(0b11), Op1(0b000), CRn(0b1100), CRm(0b0000), Op2(0b000),
- NULL, reset_val, VBAR_EL1, 0 },
+ { SYS_DESC(SYS_VBAR_EL1), NULL, reset_val, VBAR_EL1, 0 },
{ SYS_DESC(SYS_ICC_SGI1R_EL1), access_gic_sgi },
{ SYS_DESC(SYS_ICC_SRE_EL1), access_gic_sre },
- /* CONTEXTIDR_EL1 */
- { Op0(0b11), Op1(0b000), CRn(0b1101), CRm(0b0000), Op2(0b001),
- access_vm_reg, reset_val, CONTEXTIDR_EL1, 0 },
- /* TPIDR_EL1 */
- { Op0(0b11), Op1(0b000), CRn(0b1101), CRm(0b0000), Op2(0b100),
- NULL, reset_unknown, TPIDR_EL1 },
+ { SYS_DESC(SYS_CONTEXTIDR_EL1), access_vm_reg, reset_val, CONTEXTIDR_EL1, 0 },
+ { SYS_DESC(SYS_TPIDR_EL1), NULL, reset_unknown, TPIDR_EL1 },
- /* CNTKCTL_EL1 */
- { Op0(0b11), Op1(0b000), CRn(0b1110), CRm(0b0001), Op2(0b000),
- NULL, reset_val, CNTKCTL_EL1, 0},
+ { SYS_DESC(SYS_CNTKCTL_EL1), NULL, reset_val, CNTKCTL_EL1, 0},
- /* CSSELR_EL1 */
- { Op0(0b11), Op1(0b010), CRn(0b0000), CRm(0b0000), Op2(0b000),
- NULL, reset_unknown, CSSELR_EL1 },
+ { SYS_DESC(SYS_CSSELR_EL1), NULL, reset_unknown, CSSELR_EL1 },
{ SYS_DESC(SYS_PMCR_EL0), access_pmcr, reset_pmcr, },
{ SYS_DESC(SYS_PMCNTENSET_EL0), access_pmcnten, reset_unknown, PMCNTENSET_EL0 },
@@ -1009,12 +973,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_PMUSERENR_EL0), access_pmuserenr, reset_val, PMUSERENR_EL0, 0 },
{ SYS_DESC(SYS_PMOVSSET_EL0), access_pmovs, reset_unknown, PMOVSSET_EL0 },
- /* TPIDR_EL0 */
- { Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b010),
- NULL, reset_unknown, TPIDR_EL0 },
- /* TPIDRRO_EL0 */
- { Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b011),
- NULL, reset_unknown, TPIDRRO_EL0 },
+ { SYS_DESC(SYS_TPIDR_EL0), NULL, reset_unknown, TPIDR_EL0 },
+ { SYS_DESC(SYS_TPIDRRO_EL0), NULL, reset_unknown, TPIDRRO_EL0 },
{ SYS_DESC(SYS_CNTP_TVAL_EL0), access_cntp_tval },
{ SYS_DESC(SYS_CNTP_CTL_EL0), access_cntp_ctl },
@@ -1090,15 +1050,9 @@ static const struct sys_reg_desc sys_reg_descs[] = {
*/
{ SYS_DESC(SYS_PMCCFILTR_EL0), access_pmu_evtyper, reset_val, PMCCFILTR_EL0, 0 },
- /* DACR32_EL2 */
- { Op0(0b11), Op1(0b100), CRn(0b0011), CRm(0b0000), Op2(0b000),
- NULL, reset_unknown, DACR32_EL2 },
- /* IFSR32_EL2 */
- { Op0(0b11), Op1(0b100), CRn(0b0101), CRm(0b0000), Op2(0b001),
- NULL, reset_unknown, IFSR32_EL2 },
- /* FPEXC32_EL2 */
- { Op0(0b11), Op1(0b100), CRn(0b0101), CRm(0b0011), Op2(0b000),
- NULL, reset_val, FPEXC32_EL2, 0x70 },
+ { SYS_DESC(SYS_DACR32_EL2), NULL, reset_unknown, DACR32_EL2 },
+ { SYS_DESC(SYS_IFSR32_EL2), NULL, reset_unknown, IFSR32_EL2 },
+ { SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x70 },
};
static bool trap_dbgidr(struct kvm_vcpu *vcpu,
diff --git a/arch/arm64/kvm/sys_regs_generic_v8.c b/arch/arm64/kvm/sys_regs_generic_v8.c
index 46af718..969ade1 100644
--- a/arch/arm64/kvm/sys_regs_generic_v8.c
+++ b/arch/arm64/kvm/sys_regs_generic_v8.c
@@ -52,9 +52,7 @@ static void reset_actlr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
* Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
*/
static const struct sys_reg_desc genericv8_sys_regs[] = {
- /* ACTLR_EL1 */
- { Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b001),
- access_actlr, reset_actlr, ACTLR_EL1 },
+ { SYS_DESC(SYS_ACTLR_EL1), access_actlr, reset_actlr, ACTLR_EL1 },
};
static const struct sys_reg_desc genericv8_cp15_regs[] = {
--
2.9.0
* [PULL 15/79] KVM: arm64: Use common Set/Way sys definitions
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (13 preceding siblings ...)
2017-04-23 17:08 ` [PULL 14/79] KVM: arm64: Use common " Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 16/79] kvm: arm/arm64: Rework gpa callback handlers Christoffer Dall
` (64 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Mark Rutland <mark.rutland@arm.com>
Now that we have common definitions for the encoding of Set/Way cache
maintenance operations, make the KVM code use these, simplifying the
sys_reg_descs table.
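For reference, the common Set/Way definitions are plain encodings in
<asm/sysreg.h>; roughly (the values mirror the Op0/Op1/CRn/CRm/Op2 fields
removed below, and the exact helper spelling may differ):

	#define SYS_DC_ISW	sys_insn(1, 0, 7, 6, 2)
	#define SYS_DC_CSW	sys_insn(1, 0, 7, 10, 2)
	#define SYS_DC_CISW	sys_insn(1, 0, 7, 14, 2)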
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
arch/arm64/kvm/sys_regs.c | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index effa5ce..0e6c477 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -879,15 +879,9 @@ static bool access_cntp_cval(struct kvm_vcpu *vcpu,
* more demanding guest...
*/
static const struct sys_reg_desc sys_reg_descs[] = {
- /* DC ISW */
- { Op0(0b01), Op1(0b000), CRn(0b0111), CRm(0b0110), Op2(0b010),
- access_dcsw },
- /* DC CSW */
- { Op0(0b01), Op1(0b000), CRn(0b0111), CRm(0b1010), Op2(0b010),
- access_dcsw },
- /* DC CISW */
- { Op0(0b01), Op1(0b000), CRn(0b0111), CRm(0b1110), Op2(0b010),
- access_dcsw },
+ { SYS_DESC(SYS_DC_ISW), access_dcsw },
+ { SYS_DESC(SYS_DC_CSW), access_dcsw },
+ { SYS_DESC(SYS_DC_CISW), access_dcsw },
DBG_BCR_BVR_WCR_WVR_EL1(0),
DBG_BCR_BVR_WCR_WVR_EL1(1),
--
2.9.0
* [PULL 16/79] kvm: arm/arm64: Rework gpa callback handlers
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (14 preceding siblings ...)
2017-04-23 17:08 ` [PULL 15/79] KVM: arm64: Use common Set/Way sys definitions Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 17/79] KVM: arm/arm64: vgic: Defer touching GICH_VMCR to vcpu_load/put Christoffer Dall
` (63 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Suzuki K Poulose <suzuki.poulose@arm.com>
In order to perform an operation on a gpa range, we currently iterate
over each page in a user memory slot for the given range. This is
inefficient when dealing with a large range (e.g., a VMA), especially
while unmapping a range. At present, with a stage2 unmap on a range with
a hugepage backed region, we clear the PMD when we unmap the first
page in the loop. The remaining iterations simply traverse the page table
down to the PMD level only to see that nothing is in there.
This patch reworks the code to invoke the callback handlers on the
biggest range possible within the memory slot to reduce the number of
times the handler is called.
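Conceptually (a simplified sketch rather than the literal kernel code),
the per-memslot iteration goes from one handler call per page to a single
call covering the whole intersecting chunk:

	/* before: one handler call per page in the range */
	for (gfn = gfn_start; gfn < gfn_end; ++gfn)
		ret |= handler(kvm, gfn << PAGE_SHIFT, data);

	/* after: a single handler call for the whole chunk */
	gpa = hva_to_gfn_memslot(hva_start, memslot) << PAGE_SHIFT;
	ret |= handler(kvm, gpa, (u64)(hva_end - hva_start), data);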
Cc: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm/kvm/mmu.c | 31 +++++++++++++------------------
1 file changed, 13 insertions(+), 18 deletions(-)
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 962616f..69554bd 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -1512,7 +1512,8 @@ static int handle_hva_to_gpa(struct kvm *kvm,
unsigned long start,
unsigned long end,
int (*handler)(struct kvm *kvm,
- gpa_t gpa, void *data),
+ gpa_t gpa, u64 size,
+ void *data),
void *data)
{
struct kvm_memslots *slots;
@@ -1524,7 +1525,7 @@ static int handle_hva_to_gpa(struct kvm *kvm,
/* we only care about the pages that the guest sees */
kvm_for_each_memslot(memslot, slots) {
unsigned long hva_start, hva_end;
- gfn_t gfn, gfn_end;
+ gfn_t gpa;
hva_start = max(start, memslot->userspace_addr);
hva_end = min(end, memslot->userspace_addr +
@@ -1532,25 +1533,16 @@ static int handle_hva_to_gpa(struct kvm *kvm,
if (hva_start >= hva_end)
continue;
- /*
- * {gfn(page) | page intersects with [hva_start, hva_end)} =
- * {gfn_start, gfn_start+1, ..., gfn_end-1}.
- */
- gfn = hva_to_gfn_memslot(hva_start, memslot);
- gfn_end = hva_to_gfn_memslot(hva_end + PAGE_SIZE - 1, memslot);
-
- for (; gfn < gfn_end; ++gfn) {
- gpa_t gpa = gfn << PAGE_SHIFT;
- ret |= handler(kvm, gpa, data);
- }
+ gpa = hva_to_gfn_memslot(hva_start, memslot) << PAGE_SHIFT;
+ ret |= handler(kvm, gpa, (u64)(hva_end - hva_start), data);
}
return ret;
}
-static int kvm_unmap_hva_handler(struct kvm *kvm, gpa_t gpa, void *data)
+static int kvm_unmap_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data)
{
- unmap_stage2_range(kvm, gpa, PAGE_SIZE);
+ unmap_stage2_range(kvm, gpa, size);
return 0;
}
@@ -1577,10 +1569,11 @@ int kvm_unmap_hva_range(struct kvm *kvm,
return 0;
}
-static int kvm_set_spte_handler(struct kvm *kvm, gpa_t gpa, void *data)
+static int kvm_set_spte_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data)
{
pte_t *pte = (pte_t *)data;
+ WARN_ON(size != PAGE_SIZE);
/*
* We can always call stage2_set_pte with KVM_S2PTE_FLAG_LOGGING_ACTIVE
* flag clear because MMU notifiers will have unmapped a huge PMD before
@@ -1606,11 +1599,12 @@ void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
handle_hva_to_gpa(kvm, hva, end, &kvm_set_spte_handler, &stage2_pte);
}
-static int kvm_age_hva_handler(struct kvm *kvm, gpa_t gpa, void *data)
+static int kvm_age_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data)
{
pmd_t *pmd;
pte_t *pte;
+ WARN_ON(size != PAGE_SIZE && size != PMD_SIZE);
pmd = stage2_get_pmd(kvm, NULL, gpa);
if (!pmd || pmd_none(*pmd)) /* Nothing there */
return 0;
@@ -1625,11 +1619,12 @@ static int kvm_age_hva_handler(struct kvm *kvm, gpa_t gpa, void *data)
return stage2_ptep_test_and_clear_young(pte);
}
-static int kvm_test_age_hva_handler(struct kvm *kvm, gpa_t gpa, void *data)
+static int kvm_test_age_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data)
{
pmd_t *pmd;
pte_t *pte;
+ WARN_ON(size != PAGE_SIZE && size != PMD_SIZE);
pmd = stage2_get_pmd(kvm, NULL, gpa);
if (!pmd || pmd_none(*pmd)) /* Nothing there */
return 0;
--
2.9.0
* [PULL 17/79] KVM: arm/arm64: vgic: Defer touching GICH_VMCR to vcpu_load/put
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (15 preceding siblings ...)
2017-04-23 17:08 ` [PULL 16/79] kvm: arm/arm64: Rework gpa callback handlers Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 18/79] KVM: arm/arm64: vgic: Avoid flushing vgic state when there's no pending IRQ Christoffer Dall
` (62 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Christoffer Dall <cdall@cs.columbia.edu>
We don't have to save/restore the VMCR on every entry to/from the guest,
since on GICv2 we can access the control interface from EL1 and on VHE
systems with GICv3 we can access the control interface from KVM running
in EL2.
A GICv3 system without VHE becomes the rare case, which has to
save/restore the register on each round trip.
Note that userspace accesses may see out-of-date values if the VCPU is
running while userspace accesses the VGIC state via the KVM device API, but this
is already the case and it is up to userspace to quiesce the CPUs before
reading the CPU registers from the GIC for an up-to-date view.
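As a rough illustration of the userspace side (a sketch only: the exact
attr encoding is described in Documentation/virtual/kvm/devices/arm-vgic.txt,
and vgic_dev_fd is assumed to be the fd of the VGIC KVM device), the VCPUs
should be stopped before reading a GIC CPU interface register:

	uint32_t val;
	struct kvm_device_attr attr = {
		.group = KVM_DEV_ARM_VGIC_GRP_CPU_REGS,
		.attr  = 0, /* cpuid and register offset, packed per the ABI */
		.addr  = (uint64_t)(unsigned long)&val,
	};

	/* ...pause all VCPU threads first for an up-to-date view... */
	if (ioctl(vgic_dev_fd, KVM_GET_DEVICE_ATTR, &attr))
		perror("KVM_GET_DEVICE_ATTR");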
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm/include/asm/kvm_asm.h | 3 +++
arch/arm/kvm/arm.c | 11 ++++++-----
arch/arm64/include/asm/kvm_asm.h | 2 ++
include/kvm/arm_vgic.h | 3 +++
virt/kvm/arm/hyp/vgic-v2-sr.c | 3 ---
virt/kvm/arm/hyp/vgic-v3-sr.c | 14 ++++++++++----
virt/kvm/arm/vgic/vgic-init.c | 12 ++++++++++++
virt/kvm/arm/vgic/vgic-v2.c | 24 ++++++++++++++++++++++--
virt/kvm/arm/vgic/vgic-v3.c | 22 ++++++++++++++++++++--
virt/kvm/arm/vgic/vgic.c | 22 ++++++++++++++++++++++
virt/kvm/arm/vgic/vgic.h | 6 ++++++
11 files changed, 106 insertions(+), 16 deletions(-)
diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 8ef0538..dd16044 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -75,7 +75,10 @@ extern void __init_stage2_translation(void);
extern void __kvm_hyp_reset(unsigned long);
extern u64 __vgic_v3_get_ich_vtr_el2(void);
+extern u64 __vgic_v3_read_vmcr(void);
+extern void __vgic_v3_write_vmcr(u32 vmcr);
extern void __vgic_v3_init_lrs(void);
+
#endif
#endif /* __ARM_KVM_ASM_H__ */
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 96dba7c..46fd375 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -351,15 +351,14 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
vcpu->arch.host_cpu_context = this_cpu_ptr(kvm_host_cpu_state);
kvm_arm_set_running_vcpu(vcpu);
+
+ kvm_vgic_load(vcpu);
}
void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
{
- /*
- * The arch-generic KVM code expects the cpu field of a vcpu to be -1
- * if the vcpu is no longer assigned to a cpu. This is used for the
- * optimized make_all_cpus_request path.
- */
+ kvm_vgic_put(vcpu);
+
vcpu->cpu = -1;
kvm_arm_set_running_vcpu(NULL);
@@ -633,7 +632,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
* non-preemptible context.
*/
preempt_disable();
+
kvm_pmu_flush_hwstate(vcpu);
+
kvm_timer_flush_hwstate(vcpu);
kvm_vgic_flush_hwstate(vcpu);
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index ec3553eb..49f99cd 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -59,6 +59,8 @@ extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);
extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
extern u64 __vgic_v3_get_ich_vtr_el2(void);
+extern u64 __vgic_v3_read_vmcr(void);
+extern void __vgic_v3_write_vmcr(u32 vmcr);
extern void __vgic_v3_init_lrs(void);
extern u32 __kvm_get_mdcr_el2(void);
diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index b72dd2a..f7a2e31 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -306,6 +306,9 @@ bool kvm_vgic_map_is_active(struct kvm_vcpu *vcpu, unsigned int virt_irq);
int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu);
+void kvm_vgic_load(struct kvm_vcpu *vcpu);
+void kvm_vgic_put(struct kvm_vcpu *vcpu);
+
#define irqchip_in_kernel(k) (!!((k)->arch.vgic.in_kernel))
#define vgic_initialized(k) ((k)->arch.vgic.initialized)
#define vgic_ready(k) ((k)->arch.vgic.ready)
diff --git a/virt/kvm/arm/hyp/vgic-v2-sr.c b/virt/kvm/arm/hyp/vgic-v2-sr.c
index c8aeb7b..d3d3b9b 100644
--- a/virt/kvm/arm/hyp/vgic-v2-sr.c
+++ b/virt/kvm/arm/hyp/vgic-v2-sr.c
@@ -114,8 +114,6 @@ void __hyp_text __vgic_v2_save_state(struct kvm_vcpu *vcpu)
if (!base)
return;
- cpu_if->vgic_vmcr = readl_relaxed(base + GICH_VMCR);
-
if (vcpu->arch.vgic_cpu.live_lrs) {
cpu_if->vgic_apr = readl_relaxed(base + GICH_APR);
@@ -165,7 +163,6 @@ void __hyp_text __vgic_v2_restore_state(struct kvm_vcpu *vcpu)
}
}
- writel_relaxed(cpu_if->vgic_vmcr, base + GICH_VMCR);
vcpu->arch.vgic_cpu.live_lrs = live_lrs;
}
diff --git a/virt/kvm/arm/hyp/vgic-v3-sr.c b/virt/kvm/arm/hyp/vgic-v3-sr.c
index 3947095..e51ee7e 100644
--- a/virt/kvm/arm/hyp/vgic-v3-sr.c
+++ b/virt/kvm/arm/hyp/vgic-v3-sr.c
@@ -159,8 +159,6 @@ void __hyp_text __vgic_v3_save_state(struct kvm_vcpu *vcpu)
if (!cpu_if->vgic_sre)
dsb(st);
- cpu_if->vgic_vmcr = read_gicreg(ICH_VMCR_EL2);
-
if (vcpu->arch.vgic_cpu.live_lrs) {
int i;
u32 max_lr_idx, nr_pri_bits;
@@ -261,8 +259,6 @@ void __hyp_text __vgic_v3_restore_state(struct kvm_vcpu *vcpu)
live_lrs |= (1 << i);
}
- write_gicreg(cpu_if->vgic_vmcr, ICH_VMCR_EL2);
-
if (live_lrs) {
write_gicreg(cpu_if->vgic_hcr, ICH_HCR_EL2);
@@ -326,3 +322,13 @@ u64 __hyp_text __vgic_v3_get_ich_vtr_el2(void)
{
return read_gicreg(ICH_VTR_EL2);
}
+
+u64 __hyp_text __vgic_v3_read_vmcr(void)
+{
+ return read_gicreg(ICH_VMCR_EL2);
+}
+
+void __hyp_text __vgic_v3_write_vmcr(u32 vmcr)
+{
+ write_gicreg(vmcr, ICH_VMCR_EL2);
+}
diff --git a/virt/kvm/arm/vgic/vgic-init.c b/virt/kvm/arm/vgic/vgic-init.c
index 276139a..e8e973b 100644
--- a/virt/kvm/arm/vgic/vgic-init.c
+++ b/virt/kvm/arm/vgic/vgic-init.c
@@ -262,6 +262,18 @@ int vgic_init(struct kvm *kvm)
vgic_debug_init(kvm);
dist->initialized = true;
+
+ /*
+ * If we're initializing GICv2 on-demand when first running the VCPU
+ * then we need to load the VGIC state onto the CPU. We can detect
+ * this easily by checking if we are in between vcpu_load and vcpu_put
+ * when we just initialized the VGIC.
+ */
+ preempt_disable();
+ vcpu = kvm_arm_get_running_vcpu();
+ if (vcpu)
+ kvm_vgic_load(vcpu);
+ preempt_enable();
out:
return ret;
}
diff --git a/virt/kvm/arm/vgic/vgic-v2.c b/virt/kvm/arm/vgic/vgic-v2.c
index b834ecd..2f241e0 100644
--- a/virt/kvm/arm/vgic/vgic-v2.c
+++ b/virt/kvm/arm/vgic/vgic-v2.c
@@ -184,6 +184,7 @@ void vgic_v2_clear_lr(struct kvm_vcpu *vcpu, int lr)
void vgic_v2_set_vmcr(struct kvm_vcpu *vcpu, struct vgic_vmcr *vmcrp)
{
+ struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
u32 vmcr;
vmcr = (vmcrp->ctlr << GICH_VMCR_CTRL_SHIFT) & GICH_VMCR_CTRL_MASK;
@@ -194,12 +195,15 @@ void vgic_v2_set_vmcr(struct kvm_vcpu *vcpu, struct vgic_vmcr *vmcrp)
vmcr |= (vmcrp->pmr << GICH_VMCR_PRIMASK_SHIFT) &
GICH_VMCR_PRIMASK_MASK;
- vcpu->arch.vgic_cpu.vgic_v2.vgic_vmcr = vmcr;
+ cpu_if->vgic_vmcr = vmcr;
}
void vgic_v2_get_vmcr(struct kvm_vcpu *vcpu, struct vgic_vmcr *vmcrp)
{
- u32 vmcr = vcpu->arch.vgic_cpu.vgic_v2.vgic_vmcr;
+ struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
+ u32 vmcr;
+
+ vmcr = cpu_if->vgic_vmcr;
vmcrp->ctlr = (vmcr & GICH_VMCR_CTRL_MASK) >>
GICH_VMCR_CTRL_SHIFT;
@@ -375,3 +379,19 @@ int vgic_v2_probe(const struct gic_kvm_info *info)
return ret;
}
+
+void vgic_v2_load(struct kvm_vcpu *vcpu)
+{
+ struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
+ struct vgic_dist *vgic = &vcpu->kvm->arch.vgic;
+
+ writel_relaxed(cpu_if->vgic_vmcr, vgic->vctrl_base + GICH_VMCR);
+}
+
+void vgic_v2_put(struct kvm_vcpu *vcpu)
+{
+ struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
+ struct vgic_dist *vgic = &vcpu->kvm->arch.vgic;
+
+ cpu_if->vgic_vmcr = readl_relaxed(vgic->vctrl_base + GICH_VMCR);
+}
diff --git a/virt/kvm/arm/vgic/vgic-v3.c b/virt/kvm/arm/vgic/vgic-v3.c
index be0f4c3..99213d7 100644
--- a/virt/kvm/arm/vgic/vgic-v3.c
+++ b/virt/kvm/arm/vgic/vgic-v3.c
@@ -173,6 +173,7 @@ void vgic_v3_clear_lr(struct kvm_vcpu *vcpu, int lr)
void vgic_v3_set_vmcr(struct kvm_vcpu *vcpu, struct vgic_vmcr *vmcrp)
{
+ struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
u32 vmcr;
/*
@@ -188,12 +189,15 @@ void vgic_v3_set_vmcr(struct kvm_vcpu *vcpu, struct vgic_vmcr *vmcrp)
vmcr |= (vmcrp->grpen0 << ICH_VMCR_ENG0_SHIFT) & ICH_VMCR_ENG0_MASK;
vmcr |= (vmcrp->grpen1 << ICH_VMCR_ENG1_SHIFT) & ICH_VMCR_ENG1_MASK;
- vcpu->arch.vgic_cpu.vgic_v3.vgic_vmcr = vmcr;
+ cpu_if->vgic_vmcr = vmcr;
}
void vgic_v3_get_vmcr(struct kvm_vcpu *vcpu, struct vgic_vmcr *vmcrp)
{
- u32 vmcr = vcpu->arch.vgic_cpu.vgic_v3.vgic_vmcr;
+ struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
+ u32 vmcr;
+
+ vmcr = cpu_if->vgic_vmcr;
/*
* Ignore the FIQen bit, because GIC emulation always implies
@@ -386,3 +390,17 @@ int vgic_v3_probe(const struct gic_kvm_info *info)
return 0;
}
+
+void vgic_v3_load(struct kvm_vcpu *vcpu)
+{
+ struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
+
+ kvm_call_hyp(__vgic_v3_write_vmcr, cpu_if->vgic_vmcr);
+}
+
+void vgic_v3_put(struct kvm_vcpu *vcpu)
+{
+ struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
+
+ cpu_if->vgic_vmcr = kvm_call_hyp(__vgic_v3_read_vmcr);
+}
diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 654dfd4..2ac0def 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -656,6 +656,28 @@ void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
spin_unlock(&vcpu->arch.vgic_cpu.ap_list_lock);
}
+void kvm_vgic_load(struct kvm_vcpu *vcpu)
+{
+ if (unlikely(!vgic_initialized(vcpu->kvm)))
+ return;
+
+ if (kvm_vgic_global_state.type == VGIC_V2)
+ vgic_v2_load(vcpu);
+ else
+ vgic_v3_load(vcpu);
+}
+
+void kvm_vgic_put(struct kvm_vcpu *vcpu)
+{
+ if (unlikely(!vgic_initialized(vcpu->kvm)))
+ return;
+
+ if (kvm_vgic_global_state.type == VGIC_V2)
+ vgic_v2_put(vcpu);
+ else
+ vgic_v3_put(vcpu);
+}
+
int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
{
struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
diff --git a/virt/kvm/arm/vgic/vgic.h b/virt/kvm/arm/vgic/vgic.h
index db28f7c..9afb455 100644
--- a/virt/kvm/arm/vgic/vgic.h
+++ b/virt/kvm/arm/vgic/vgic.h
@@ -130,6 +130,9 @@ int vgic_v2_map_resources(struct kvm *kvm);
int vgic_register_dist_iodev(struct kvm *kvm, gpa_t dist_base_address,
enum vgic_type);
+void vgic_v2_load(struct kvm_vcpu *vcpu);
+void vgic_v2_put(struct kvm_vcpu *vcpu);
+
static inline void vgic_get_irq_kref(struct vgic_irq *irq)
{
if (irq->intid < VGIC_MIN_LPI)
@@ -150,6 +153,9 @@ int vgic_v3_probe(const struct gic_kvm_info *info);
int vgic_v3_map_resources(struct kvm *kvm);
int vgic_register_redist_iodevs(struct kvm *kvm, gpa_t dist_base_address);
+void vgic_v3_load(struct kvm_vcpu *vcpu);
+void vgic_v3_put(struct kvm_vcpu *vcpu);
+
int vgic_register_its_iodevs(struct kvm *kvm);
bool vgic_has_its(struct kvm *kvm);
int kvm_vgic_register_its_device(void);
--
2.9.0
* [PULL 18/79] KVM: arm/arm64: vgic: Avoid flushing vgic state when there's no pending IRQ
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (16 preceding siblings ...)
2017-04-23 17:08 ` [PULL 17/79] KVM: arm/arm64: vgic: Defer touching GICH_VMCR to vcpu_load/put Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 19/79] KVM: arm/arm64: vgic: Get rid of live_lrs Christoffer Dall
` (61 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Shih-Wei Li <shihwei@cs.columbia.edu>
We do not need to flush the vgic state on each world switch unless
there is a pending IRQ queued to the vgic's AP list. We can thus reduce
the overhead by not grabbing the spinlock and not making the extra
function call to vgic_flush_lr_state.
Note: list_empty is a single atomic read (uses READ_ONCE) and can
therefore check if a list is empty or not without the need to take the
spinlock protecting the list.
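For reference, list_empty() in include/linux/list.h boils down to a single
READ_ONCE of the head pointer, roughly:

	static inline int list_empty(const struct list_head *head)
	{
		return READ_ONCE(head->next) == head;
	}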
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Shih-Wei Li <shihwei@cs.columbia.edu>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
virt/kvm/arm/vgic/vgic.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 2ac0def..1043291 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -637,12 +637,17 @@ static void vgic_flush_lr_state(struct kvm_vcpu *vcpu)
/* Sync back the hardware VGIC state into our emulation after a guest's run. */
void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
{
+ struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+
if (unlikely(!vgic_initialized(vcpu->kvm)))
return;
vgic_process_maintenance_interrupt(vcpu);
vgic_fold_lr_state(vcpu);
vgic_prune_ap_list(vcpu);
+
+ /* Make sure we can fast-path in flush_hwstate */
+ vgic_cpu->used_lrs = 0;
}
/* Flush our emulation state into the GIC hardware before entering the guest. */
@@ -651,6 +656,18 @@ void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
if (unlikely(!vgic_initialized(vcpu->kvm)))
return;
+ /*
+ * If there are no virtual interrupts active or pending for this
+ * VCPU, then there is no work to do and we can bail out without
+ * taking any lock. There is a potential race with someone injecting
+ * interrupts to the VCPU, but it is a benign race as the VCPU will
+ * either observe the new interrupt before or after doing this check,
+ * and introducing additional synchronization mechanism doesn't change
+ * this.
+ */
+ if (list_empty(&vcpu->arch.vgic_cpu.ap_list_head))
+ return;
+
spin_lock(&vcpu->arch.vgic_cpu.ap_list_lock);
vgic_flush_lr_state(vcpu);
spin_unlock(&vcpu->arch.vgic_cpu.ap_list_lock);
--
2.9.0
* [PULL 19/79] KVM: arm/arm64: vgic: Get rid of live_lrs
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (17 preceding siblings ...)
2017-04-23 17:08 ` [PULL 18/79] KVM: arm/arm64: vgic: Avoid flushing vgic state when there's no pending IRQ Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 20/79] KVM: arm/arm64: vgic: Only set underflow when actually out of LRs Christoffer Dall
` (60 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Christoffer Dall <christoffer.dall@linaro.org>
There is no need to calculate and maintain live_lrs when we always
populate the lowest numbered LRs first on every entry and clear all LRs
on every exit.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
include/kvm/arm_vgic.h | 2 --
virt/kvm/arm/hyp/vgic-v2-sr.c | 39 ++++++++++-----------------------------
virt/kvm/arm/hyp/vgic-v3-sr.c | 42 ++++++++++++------------------------------
3 files changed, 22 insertions(+), 61 deletions(-)
diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index f7a2e31..ea940db 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -264,8 +264,6 @@ struct vgic_cpu {
*/
struct list_head ap_list_head;
- u64 live_lrs;
-
/*
* Members below are used with GICv3 emulation only and represent
* parts of the redistributor.
diff --git a/virt/kvm/arm/hyp/vgic-v2-sr.c b/virt/kvm/arm/hyp/vgic-v2-sr.c
index d3d3b9b..34b37ce 100644
--- a/virt/kvm/arm/hyp/vgic-v2-sr.c
+++ b/virt/kvm/arm/hyp/vgic-v2-sr.c
@@ -26,27 +26,23 @@ static void __hyp_text save_maint_int_state(struct kvm_vcpu *vcpu,
void __iomem *base)
{
struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
- int nr_lr = (kern_hyp_va(&kvm_vgic_global_state))->nr_lr;
+ u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;
u32 eisr0, eisr1;
int i;
bool expect_mi;
expect_mi = !!(cpu_if->vgic_hcr & GICH_HCR_UIE);
- for (i = 0; i < nr_lr; i++) {
- if (!(vcpu->arch.vgic_cpu.live_lrs & (1UL << i)))
- continue;
-
+ for (i = 0; i < used_lrs && !expect_mi; i++)
expect_mi |= (!(cpu_if->vgic_lr[i] & GICH_LR_HW) &&
(cpu_if->vgic_lr[i] & GICH_LR_EOI));
- }
if (expect_mi) {
cpu_if->vgic_misr = readl_relaxed(base + GICH_MISR);
if (cpu_if->vgic_misr & GICH_MISR_EOI) {
eisr0 = readl_relaxed(base + GICH_EISR0);
- if (unlikely(nr_lr > 32))
+ if (unlikely(used_lrs > 32))
eisr1 = readl_relaxed(base + GICH_EISR1);
else
eisr1 = 0;
@@ -87,13 +83,10 @@ static void __hyp_text save_elrsr(struct kvm_vcpu *vcpu, void __iomem *base)
static void __hyp_text save_lrs(struct kvm_vcpu *vcpu, void __iomem *base)
{
struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
- int nr_lr = (kern_hyp_va(&kvm_vgic_global_state))->nr_lr;
int i;
+ u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;
- for (i = 0; i < nr_lr; i++) {
- if (!(vcpu->arch.vgic_cpu.live_lrs & (1UL << i)))
- continue;
-
+ for (i = 0; i < used_lrs; i++) {
if (cpu_if->vgic_elrsr & (1UL << i))
cpu_if->vgic_lr[i] &= ~GICH_LR_STATE;
else
@@ -110,11 +103,12 @@ void __hyp_text __vgic_v2_save_state(struct kvm_vcpu *vcpu)
struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
struct vgic_dist *vgic = &kvm->arch.vgic;
void __iomem *base = kern_hyp_va(vgic->vctrl_base);
+ u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;
if (!base)
return;
- if (vcpu->arch.vgic_cpu.live_lrs) {
+ if (used_lrs) {
cpu_if->vgic_apr = readl_relaxed(base + GICH_APR);
save_maint_int_state(vcpu, base);
@@ -122,8 +116,6 @@ void __hyp_text __vgic_v2_save_state(struct kvm_vcpu *vcpu)
save_lrs(vcpu, base);
writel_relaxed(0, base + GICH_HCR);
-
- vcpu->arch.vgic_cpu.live_lrs = 0;
} else {
cpu_if->vgic_eisr = 0;
cpu_if->vgic_elrsr = ~0UL;
@@ -139,31 +131,20 @@ void __hyp_text __vgic_v2_restore_state(struct kvm_vcpu *vcpu)
struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
struct vgic_dist *vgic = &kvm->arch.vgic;
void __iomem *base = kern_hyp_va(vgic->vctrl_base);
- int nr_lr = (kern_hyp_va(&kvm_vgic_global_state))->nr_lr;
int i;
- u64 live_lrs = 0;
+ u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;
if (!base)
return;
-
- for (i = 0; i < nr_lr; i++)
- if (cpu_if->vgic_lr[i] & GICH_LR_STATE)
- live_lrs |= 1UL << i;
-
- if (live_lrs) {
+ if (used_lrs) {
writel_relaxed(cpu_if->vgic_hcr, base + GICH_HCR);
writel_relaxed(cpu_if->vgic_apr, base + GICH_APR);
- for (i = 0; i < nr_lr; i++) {
- if (!(live_lrs & (1UL << i)))
- continue;
-
+ for (i = 0; i < used_lrs; i++) {
writel_relaxed(cpu_if->vgic_lr[i],
base + GICH_LR0 + (i * 4));
}
}
-
- vcpu->arch.vgic_cpu.live_lrs = live_lrs;
}
#ifdef CONFIG_ARM64
diff --git a/virt/kvm/arm/hyp/vgic-v3-sr.c b/virt/kvm/arm/hyp/vgic-v3-sr.c
index e51ee7e..b3c36b6 100644
--- a/virt/kvm/arm/hyp/vgic-v3-sr.c
+++ b/virt/kvm/arm/hyp/vgic-v3-sr.c
@@ -118,18 +118,16 @@ static void __hyp_text __gic_v3_set_lr(u64 val, int lr)
}
}
-static void __hyp_text save_maint_int_state(struct kvm_vcpu *vcpu, int nr_lr)
+static void __hyp_text save_maint_int_state(struct kvm_vcpu *vcpu)
{
struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
int i;
bool expect_mi;
+ u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;
expect_mi = !!(cpu_if->vgic_hcr & ICH_HCR_UIE);
- for (i = 0; i < nr_lr; i++) {
- if (!(vcpu->arch.vgic_cpu.live_lrs & (1UL << i)))
- continue;
-
+ for (i = 0; i < used_lrs; i++) {
expect_mi |= (!(cpu_if->vgic_lr[i] & ICH_LR_HW) &&
(cpu_if->vgic_lr[i] & ICH_LR_EOI));
}
@@ -150,6 +148,7 @@ static void __hyp_text save_maint_int_state(struct kvm_vcpu *vcpu, int nr_lr)
void __hyp_text __vgic_v3_save_state(struct kvm_vcpu *vcpu)
{
struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
+ u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;
u64 val;
/*
@@ -159,23 +158,19 @@ void __hyp_text __vgic_v3_save_state(struct kvm_vcpu *vcpu)
if (!cpu_if->vgic_sre)
dsb(st);
- if (vcpu->arch.vgic_cpu.live_lrs) {
+ if (used_lrs) {
int i;
- u32 max_lr_idx, nr_pri_bits;
+ u32 nr_pri_bits;
cpu_if->vgic_elrsr = read_gicreg(ICH_ELSR_EL2);
write_gicreg(0, ICH_HCR_EL2);
val = read_gicreg(ICH_VTR_EL2);
- max_lr_idx = vtr_to_max_lr_idx(val);
nr_pri_bits = vtr_to_nr_pri_bits(val);
- save_maint_int_state(vcpu, max_lr_idx + 1);
-
- for (i = 0; i <= max_lr_idx; i++) {
- if (!(vcpu->arch.vgic_cpu.live_lrs & (1UL << i)))
- continue;
+ save_maint_int_state(vcpu);
+ for (i = 0; i <= used_lrs; i++) {
if (cpu_if->vgic_elrsr & (1 << i))
cpu_if->vgic_lr[i] &= ~ICH_LR_STATE;
else
@@ -203,8 +198,6 @@ void __hyp_text __vgic_v3_save_state(struct kvm_vcpu *vcpu)
default:
cpu_if->vgic_ap1r[0] = read_gicreg(ICH_AP1R0_EL2);
}
-
- vcpu->arch.vgic_cpu.live_lrs = 0;
} else {
cpu_if->vgic_misr = 0;
cpu_if->vgic_eisr = 0;
@@ -232,9 +225,9 @@ void __hyp_text __vgic_v3_save_state(struct kvm_vcpu *vcpu)
void __hyp_text __vgic_v3_restore_state(struct kvm_vcpu *vcpu)
{
struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
+ u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;
u64 val;
- u32 max_lr_idx, nr_pri_bits;
- u16 live_lrs = 0;
+ u32 nr_pri_bits;
int i;
/*
@@ -251,15 +244,9 @@ void __hyp_text __vgic_v3_restore_state(struct kvm_vcpu *vcpu)
}
val = read_gicreg(ICH_VTR_EL2);
- max_lr_idx = vtr_to_max_lr_idx(val);
nr_pri_bits = vtr_to_nr_pri_bits(val);
- for (i = 0; i <= max_lr_idx; i++) {
- if (cpu_if->vgic_lr[i] & ICH_LR_STATE)
- live_lrs |= (1 << i);
- }
-
- if (live_lrs) {
+ if (used_lrs) {
write_gicreg(cpu_if->vgic_hcr, ICH_HCR_EL2);
switch (nr_pri_bits) {
@@ -282,12 +269,8 @@ void __hyp_text __vgic_v3_restore_state(struct kvm_vcpu *vcpu)
write_gicreg(cpu_if->vgic_ap1r[0], ICH_AP1R0_EL2);
}
- for (i = 0; i <= max_lr_idx; i++) {
- if (!(live_lrs & (1 << i)))
- continue;
-
+ for (i = 0; i < used_lrs; i++)
__gic_v3_set_lr(cpu_if->vgic_lr[i], i);
- }
}
/*
@@ -299,7 +282,6 @@ void __hyp_text __vgic_v3_restore_state(struct kvm_vcpu *vcpu)
isb();
dsb(sy);
}
- vcpu->arch.vgic_cpu.live_lrs = live_lrs;
/*
* Prevent the guest from touching the GIC system registers if
--
2.9.0
* [PULL 20/79] KVM: arm/arm64: vgic: Only set underflow when actually out of LRs
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (18 preceding siblings ...)
2017-04-23 17:08 ` [PULL 19/79] KVM: arm/arm64: vgic: Get rid of live_lrs Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 21/79] KVM: arm/arm64: vgic: Get rid of unnecessary process_maintenance operation Christoffer Dall
` (59 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
We currently assume that all the interrupts in our AP list will be
queued to LRs, but that's not necessarily the case, because some of them
could have been migrated away to different VCPUs and only the VCPU
thread itself can remove interrupts from its AP list.
Therefore, slightly change the logic to only set the underflow
interrupt when we actually run out of LRs.
As it turns out, this allows us to further simplify the handling in
vgic_sync_hwstate in later patches.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
virt/kvm/arm/vgic/vgic.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 1043291..442f7df 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -601,10 +601,8 @@ static void vgic_flush_lr_state(struct kvm_vcpu *vcpu)
DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&vgic_cpu->ap_list_lock));
- if (compute_ap_list_depth(vcpu) > kvm_vgic_global_state.nr_lr) {
- vgic_set_underflow(vcpu);
+ if (compute_ap_list_depth(vcpu) > kvm_vgic_global_state.nr_lr)
vgic_sort_ap_list(vcpu);
- }
list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list) {
spin_lock(&irq->irq_lock);
@@ -623,8 +621,12 @@ static void vgic_flush_lr_state(struct kvm_vcpu *vcpu)
next:
spin_unlock(&irq->irq_lock);
- if (count == kvm_vgic_global_state.nr_lr)
+ if (count == kvm_vgic_global_state.nr_lr) {
+ if (!list_is_last(&irq->ap_list,
+ &vgic_cpu->ap_list_head))
+ vgic_set_underflow(vcpu);
break;
+ }
}
vcpu->arch.vgic_cpu.used_lrs = count;
--
2.9.0
* [PULL 21/79] KVM: arm/arm64: vgic: Get rid of unnecessary process_maintenance operation
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (19 preceding siblings ...)
2017-04-23 17:08 ` [PULL 20/79] KVM: arm/arm64: vgic: Only set underflow when actually out of LRs Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 22/79] KVM: arm/arm64: vgic: Get rid of unnecessary save_maint_int_state Christoffer Dall
` (58 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Christoffer Dall <christoffer.dall@linaro.org>
Since we always read back the LRs that we wrote to the guest and the
MISR and EISR registers simply provide a summary of the configuration of
the bits in the LRs, there is really no need to read back those status
registers and process them. We might as well just signal the
notifyfd when folding the LR state and save some cycles in the process.
We now clear the underflow bit in the fold_lr_state functions as we only
need to clear this bit if we had used all the LRs, so this is as good a
place as any to do that work.
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
virt/kvm/arm/vgic/vgic-v2.c | 59 +++++++++------------------------------------
virt/kvm/arm/vgic/vgic-v3.c | 51 ++++++++++-----------------------------
virt/kvm/arm/vgic/vgic.c | 9 -------
virt/kvm/arm/vgic/vgic.h | 2 --
4 files changed, 25 insertions(+), 96 deletions(-)
diff --git a/virt/kvm/arm/vgic/vgic-v2.c b/virt/kvm/arm/vgic/vgic-v2.c
index 2f241e0..b58b086 100644
--- a/virt/kvm/arm/vgic/vgic-v2.c
+++ b/virt/kvm/arm/vgic/vgic-v2.c
@@ -22,59 +22,17 @@
#include "vgic.h"
-/*
- * Call this function to convert a u64 value to an unsigned long * bitmask
- * in a way that works on both 32-bit and 64-bit LE and BE platforms.
- *
- * Warning: Calling this function may modify *val.
- */
-static unsigned long *u64_to_bitmask(u64 *val)
-{
-#if defined(CONFIG_CPU_BIG_ENDIAN) && BITS_PER_LONG == 32
- *val = (*val >> 32) | (*val << 32);
-#endif
- return (unsigned long *)val;
-}
-
-void vgic_v2_process_maintenance(struct kvm_vcpu *vcpu)
+void vgic_v2_set_underflow(struct kvm_vcpu *vcpu)
{
struct vgic_v2_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v2;
- if (cpuif->vgic_misr & GICH_MISR_EOI) {
- u64 eisr = cpuif->vgic_eisr;
- unsigned long *eisr_bmap = u64_to_bitmask(&eisr);
- int lr;
-
- for_each_set_bit(lr, eisr_bmap, kvm_vgic_global_state.nr_lr) {
- u32 intid = cpuif->vgic_lr[lr] & GICH_LR_VIRTUALID;
-
- WARN_ON(cpuif->vgic_lr[lr] & GICH_LR_STATE);
-
- /* Only SPIs require notification */
- if (vgic_valid_spi(vcpu->kvm, intid))
- kvm_notify_acked_irq(vcpu->kvm, 0,
- intid - VGIC_NR_PRIVATE_IRQS);
- }
- }
-
- /* check and disable underflow maintenance IRQ */
- cpuif->vgic_hcr &= ~GICH_HCR_UIE;
-
- /*
- * In the next iterations of the vcpu loop, if we sync the
- * vgic state after flushing it, but before entering the guest
- * (this happens for pending signals and vmid rollovers), then
- * make sure we don't pick up any old maintenance interrupts
- * here.
- */
- cpuif->vgic_eisr = 0;
+ cpuif->vgic_hcr |= GICH_HCR_UIE;
}
-void vgic_v2_set_underflow(struct kvm_vcpu *vcpu)
+static bool lr_signals_eoi_mi(u32 lr_val)
{
- struct vgic_v2_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v2;
-
- cpuif->vgic_hcr |= GICH_HCR_UIE;
+ return !(lr_val & GICH_LR_STATE) && (lr_val & GICH_LR_EOI) &&
+ !(lr_val & GICH_LR_HW);
}
/*
@@ -89,11 +47,18 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
struct vgic_v2_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v2;
int lr;
+ cpuif->vgic_hcr &= ~GICH_HCR_UIE;
+
for (lr = 0; lr < vcpu->arch.vgic_cpu.used_lrs; lr++) {
u32 val = cpuif->vgic_lr[lr];
u32 intid = val & GICH_LR_VIRTUALID;
struct vgic_irq *irq;
+ /* Notify fds when the guest EOI'ed a level-triggered SPI */
+ if (lr_signals_eoi_mi(val) && vgic_valid_spi(vcpu->kvm, intid))
+ kvm_notify_acked_irq(vcpu->kvm, 0,
+ intid - VGIC_NR_PRIVATE_IRQS);
+
irq = vgic_get_irq(vcpu->kvm, vcpu, intid);
spin_lock(&irq->irq_lock);
diff --git a/virt/kvm/arm/vgic/vgic-v3.c b/virt/kvm/arm/vgic/vgic-v3.c
index 99213d7..4f2dce6 100644
--- a/virt/kvm/arm/vgic/vgic-v3.c
+++ b/virt/kvm/arm/vgic/vgic-v3.c
@@ -21,50 +21,17 @@
#include "vgic.h"
-void vgic_v3_process_maintenance(struct kvm_vcpu *vcpu)
+void vgic_v3_set_underflow(struct kvm_vcpu *vcpu)
{
struct vgic_v3_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v3;
- u32 model = vcpu->kvm->arch.vgic.vgic_model;
-
- if (cpuif->vgic_misr & ICH_MISR_EOI) {
- unsigned long eisr_bmap = cpuif->vgic_eisr;
- int lr;
-
- for_each_set_bit(lr, &eisr_bmap, kvm_vgic_global_state.nr_lr) {
- u32 intid;
- u64 val = cpuif->vgic_lr[lr];
-
- if (model == KVM_DEV_TYPE_ARM_VGIC_V3)
- intid = val & ICH_LR_VIRTUAL_ID_MASK;
- else
- intid = val & GICH_LR_VIRTUALID;
-
- WARN_ON(cpuif->vgic_lr[lr] & ICH_LR_STATE);
-
- /* Only SPIs require notification */
- if (vgic_valid_spi(vcpu->kvm, intid))
- kvm_notify_acked_irq(vcpu->kvm, 0,
- intid - VGIC_NR_PRIVATE_IRQS);
- }
-
- /*
- * In the next iterations of the vcpu loop, if we sync
- * the vgic state after flushing it, but before
- * entering the guest (this happens for pending
- * signals and vmid rollovers), then make sure we
- * don't pick up any old maintenance interrupts here.
- */
- cpuif->vgic_eisr = 0;
- }
- cpuif->vgic_hcr &= ~ICH_HCR_UIE;
+ cpuif->vgic_hcr |= ICH_HCR_UIE;
}
-void vgic_v3_set_underflow(struct kvm_vcpu *vcpu)
+static bool lr_signals_eoi_mi(u64 lr_val)
{
- struct vgic_v3_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v3;
-
- cpuif->vgic_hcr |= ICH_HCR_UIE;
+ return !(lr_val & ICH_LR_STATE) && (lr_val & ICH_LR_EOI) &&
+ !(lr_val & ICH_LR_HW);
}
void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
@@ -73,6 +40,8 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
u32 model = vcpu->kvm->arch.vgic.vgic_model;
int lr;
+ cpuif->vgic_hcr &= ~ICH_HCR_UIE;
+
for (lr = 0; lr < vcpu->arch.vgic_cpu.used_lrs; lr++) {
u64 val = cpuif->vgic_lr[lr];
u32 intid;
@@ -82,6 +51,12 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
intid = val & ICH_LR_VIRTUAL_ID_MASK;
else
intid = val & GICH_LR_VIRTUALID;
+
+ /* Notify fds when the guest EOI'ed a level-triggered IRQ */
+ if (lr_signals_eoi_mi(val) && vgic_valid_spi(vcpu->kvm, intid))
+ kvm_notify_acked_irq(vcpu->kvm, 0,
+ intid - VGIC_NR_PRIVATE_IRQS);
+
irq = vgic_get_irq(vcpu->kvm, vcpu, intid);
if (!irq) /* An LPI could have been unmapped. */
continue;
diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 442f7df..b64b143 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -527,14 +527,6 @@ static void vgic_prune_ap_list(struct kvm_vcpu *vcpu)
spin_unlock(&vgic_cpu->ap_list_lock);
}
-static inline void vgic_process_maintenance_interrupt(struct kvm_vcpu *vcpu)
-{
- if (kvm_vgic_global_state.type == VGIC_V2)
- vgic_v2_process_maintenance(vcpu);
- else
- vgic_v3_process_maintenance(vcpu);
-}
-
static inline void vgic_fold_lr_state(struct kvm_vcpu *vcpu)
{
if (kvm_vgic_global_state.type == VGIC_V2)
@@ -644,7 +636,6 @@ void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
if (unlikely(!vgic_initialized(vcpu->kvm)))
return;
- vgic_process_maintenance_interrupt(vcpu);
vgic_fold_lr_state(vcpu);
vgic_prune_ap_list(vcpu);
diff --git a/virt/kvm/arm/vgic/vgic.h b/virt/kvm/arm/vgic/vgic.h
index 9afb455..44445da 100644
--- a/virt/kvm/arm/vgic/vgic.h
+++ b/virt/kvm/arm/vgic/vgic.h
@@ -112,7 +112,6 @@ void vgic_kick_vcpus(struct kvm *kvm);
int vgic_check_ioaddr(struct kvm *kvm, phys_addr_t *ioaddr,
phys_addr_t addr, phys_addr_t alignment);
-void vgic_v2_process_maintenance(struct kvm_vcpu *vcpu);
void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu);
void vgic_v2_populate_lr(struct kvm_vcpu *vcpu, struct vgic_irq *irq, int lr);
void vgic_v2_clear_lr(struct kvm_vcpu *vcpu, int lr);
@@ -141,7 +140,6 @@ static inline void vgic_get_irq_kref(struct vgic_irq *irq)
kref_get(&irq->refcount);
}
-void vgic_v3_process_maintenance(struct kvm_vcpu *vcpu);
void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu);
void vgic_v3_populate_lr(struct kvm_vcpu *vcpu, struct vgic_irq *irq, int lr);
void vgic_v3_clear_lr(struct kvm_vcpu *vcpu, int lr);
--
2.9.0
* [PULL 22/79] KVM: arm/arm64: vgic: Get rid of unnecessary save_maint_int_state
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (20 preceding siblings ...)
2017-04-23 17:08 ` [PULL 21/79] KVM: arm/arm64: vgic: Get rid of unnecessary process_maintenance operation Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 23/79] KVM: arm/arm64: vgic: Get rid of MISR and EISR fields Christoffer Dall
` (57 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Christoffer Dall <christoffer.dall@linaro.org>
Now that we no longer look at the MISR and EISR values, we can get
rid of the logic to save them in the GIC save/restore code.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
virt/kvm/arm/hyp/vgic-v2-sr.c | 40 ----------------------------------------
virt/kvm/arm/hyp/vgic-v3-sr.c | 29 -----------------------------
2 files changed, 69 deletions(-)
diff --git a/virt/kvm/arm/hyp/vgic-v2-sr.c b/virt/kvm/arm/hyp/vgic-v2-sr.c
index 34b37ce..a4c3bb0 100644
--- a/virt/kvm/arm/hyp/vgic-v2-sr.c
+++ b/virt/kvm/arm/hyp/vgic-v2-sr.c
@@ -22,45 +22,6 @@
#include <asm/kvm_emulate.h>
#include <asm/kvm_hyp.h>
-static void __hyp_text save_maint_int_state(struct kvm_vcpu *vcpu,
- void __iomem *base)
-{
- struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
- u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;
- u32 eisr0, eisr1;
- int i;
- bool expect_mi;
-
- expect_mi = !!(cpu_if->vgic_hcr & GICH_HCR_UIE);
-
- for (i = 0; i < used_lrs && !expect_mi; i++)
- expect_mi |= (!(cpu_if->vgic_lr[i] & GICH_LR_HW) &&
- (cpu_if->vgic_lr[i] & GICH_LR_EOI));
-
- if (expect_mi) {
- cpu_if->vgic_misr = readl_relaxed(base + GICH_MISR);
-
- if (cpu_if->vgic_misr & GICH_MISR_EOI) {
- eisr0 = readl_relaxed(base + GICH_EISR0);
- if (unlikely(used_lrs > 32))
- eisr1 = readl_relaxed(base + GICH_EISR1);
- else
- eisr1 = 0;
- } else {
- eisr0 = eisr1 = 0;
- }
- } else {
- cpu_if->vgic_misr = 0;
- eisr0 = eisr1 = 0;
- }
-
-#ifdef CONFIG_CPU_BIG_ENDIAN
- cpu_if->vgic_eisr = ((u64)eisr0 << 32) | eisr1;
-#else
- cpu_if->vgic_eisr = ((u64)eisr1 << 32) | eisr0;
-#endif
-}
-
static void __hyp_text save_elrsr(struct kvm_vcpu *vcpu, void __iomem *base)
{
struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
@@ -111,7 +72,6 @@ void __hyp_text __vgic_v2_save_state(struct kvm_vcpu *vcpu)
if (used_lrs) {
cpu_if->vgic_apr = readl_relaxed(base + GICH_APR);
- save_maint_int_state(vcpu, base);
save_elrsr(vcpu, base);
save_lrs(vcpu, base);
diff --git a/virt/kvm/arm/hyp/vgic-v3-sr.c b/virt/kvm/arm/hyp/vgic-v3-sr.c
index b3c36b6..41bbbb0 100644
--- a/virt/kvm/arm/hyp/vgic-v3-sr.c
+++ b/virt/kvm/arm/hyp/vgic-v3-sr.c
@@ -118,33 +118,6 @@ static void __hyp_text __gic_v3_set_lr(u64 val, int lr)
}
}
-static void __hyp_text save_maint_int_state(struct kvm_vcpu *vcpu)
-{
- struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
- int i;
- bool expect_mi;
- u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;
-
- expect_mi = !!(cpu_if->vgic_hcr & ICH_HCR_UIE);
-
- for (i = 0; i < used_lrs; i++) {
- expect_mi |= (!(cpu_if->vgic_lr[i] & ICH_LR_HW) &&
- (cpu_if->vgic_lr[i] & ICH_LR_EOI));
- }
-
- if (expect_mi) {
- cpu_if->vgic_misr = read_gicreg(ICH_MISR_EL2);
-
- if (cpu_if->vgic_misr & ICH_MISR_EOI)
- cpu_if->vgic_eisr = read_gicreg(ICH_EISR_EL2);
- else
- cpu_if->vgic_eisr = 0;
- } else {
- cpu_if->vgic_misr = 0;
- cpu_if->vgic_eisr = 0;
- }
-}
-
void __hyp_text __vgic_v3_save_state(struct kvm_vcpu *vcpu)
{
struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
@@ -168,8 +141,6 @@ void __hyp_text __vgic_v3_save_state(struct kvm_vcpu *vcpu)
val = read_gicreg(ICH_VTR_EL2);
nr_pri_bits = vtr_to_nr_pri_bits(val);
- save_maint_int_state(vcpu);
-
for (i = 0; i <= used_lrs; i++) {
if (cpu_if->vgic_elrsr & (1 << i))
cpu_if->vgic_lr[i] &= ~ICH_LR_STATE;
--
2.9.0
* [PULL 23/79] KVM: arm/arm64: vgic: Get rid of MISR and EISR fields
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (21 preceding siblings ...)
2017-04-23 17:08 ` [PULL 22/79] KVM: arm/arm64: vgic: Get rid of unnecessary save_maint_int_state Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 24/79] KVM: arm/arm64: vgic: Implement early VGIC init functionality Christoffer Dall
` (56 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Christoffer Dall <christoffer.dall@linaro.org>
We don't use these fields anymore so let's nuke them completely.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
include/kvm/arm_vgic.h | 4 ----
virt/kvm/arm/hyp/vgic-v2-sr.c | 2 --
virt/kvm/arm/hyp/vgic-v3-sr.c | 2 --
3 files changed, 8 deletions(-)
diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index ea940db..26ed4fb 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -225,8 +225,6 @@ struct vgic_dist {
struct vgic_v2_cpu_if {
u32 vgic_hcr;
u32 vgic_vmcr;
- u32 vgic_misr; /* Saved only */
- u64 vgic_eisr; /* Saved only */
u64 vgic_elrsr; /* Saved only */
u32 vgic_apr;
u32 vgic_lr[VGIC_V2_MAX_LRS];
@@ -236,8 +234,6 @@ struct vgic_v3_cpu_if {
u32 vgic_hcr;
u32 vgic_vmcr;
u32 vgic_sre; /* Restored only, change ignored */
- u32 vgic_misr; /* Saved only */
- u32 vgic_eisr; /* Saved only */
u32 vgic_elrsr; /* Saved only */
u32 vgic_ap0r[4];
u32 vgic_ap1r[4];
diff --git a/virt/kvm/arm/hyp/vgic-v2-sr.c b/virt/kvm/arm/hyp/vgic-v2-sr.c
index a4c3bb0..a3f18d3 100644
--- a/virt/kvm/arm/hyp/vgic-v2-sr.c
+++ b/virt/kvm/arm/hyp/vgic-v2-sr.c
@@ -77,9 +77,7 @@ void __hyp_text __vgic_v2_save_state(struct kvm_vcpu *vcpu)
writel_relaxed(0, base + GICH_HCR);
} else {
- cpu_if->vgic_eisr = 0;
cpu_if->vgic_elrsr = ~0UL;
- cpu_if->vgic_misr = 0;
cpu_if->vgic_apr = 0;
}
}
diff --git a/virt/kvm/arm/hyp/vgic-v3-sr.c b/virt/kvm/arm/hyp/vgic-v3-sr.c
index 41bbbb0..3d0b1dd 100644
--- a/virt/kvm/arm/hyp/vgic-v3-sr.c
+++ b/virt/kvm/arm/hyp/vgic-v3-sr.c
@@ -170,8 +170,6 @@ void __hyp_text __vgic_v3_save_state(struct kvm_vcpu *vcpu)
cpu_if->vgic_ap1r[0] = read_gicreg(ICH_AP1R0_EL2);
}
} else {
- cpu_if->vgic_misr = 0;
- cpu_if->vgic_eisr = 0;
cpu_if->vgic_elrsr = 0xffff;
cpu_if->vgic_ap0r[0] = 0;
cpu_if->vgic_ap0r[1] = 0;
--
2.9.0
* [PULL 24/79] KVM: arm/arm64: vgic: Implement early VGIC init functionality
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (22 preceding siblings ...)
2017-04-23 17:08 ` [PULL 23/79] KVM: arm/arm64: vgic: Get rid of MISR and EISR fields Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 25/79] KVM: arm/arm64: vgic: Don't check vgic_initialized in sync/flush Christoffer Dall
` (55 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
Implement early initialization for both the distributor and the CPU
interfaces. The basic idea is that even though the VGIC is not
functional or not requested from user space, the critical path of the
run loop can still call VGIC functions that just won't do anything,
without them having to check additional initialization flags to ensure
they don't look at uninitialized data structures.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
virt/kvm/arm/vgic/vgic-init.c | 96 +++++++++++++++++++++++++------------------
1 file changed, 56 insertions(+), 40 deletions(-)
diff --git a/virt/kvm/arm/vgic/vgic-init.c b/virt/kvm/arm/vgic/vgic-init.c
index e8e973b..87de048 100644
--- a/virt/kvm/arm/vgic/vgic-init.c
+++ b/virt/kvm/arm/vgic/vgic-init.c
@@ -24,7 +24,12 @@
/*
* Initialization rules: there are multiple stages to the vgic
- * initialization, both for the distributor and the CPU interfaces.
+ * initialization, both for the distributor and the CPU interfaces. The basic
+ * idea is that even though the VGIC is not functional or not requested from
+ * user space, the critical path of the run loop can still call VGIC functions
+ * that just won't do anything, without them having to check additional
+ * initialization flags to ensure they don't look at uninitialized data
+ * structures.
*
* Distributor:
*
@@ -39,23 +44,67 @@
*
* CPU Interface:
*
- * - kvm_vgic_cpu_early_init(): initialization of static data that
+ * - kvm_vgic_vcpu_early_init(): initialization of static data that
* doesn't depend on any sizing information or emulation type. No
* allocation is allowed there.
*/
/* EARLY INIT */
-/*
- * Those 2 functions should not be needed anymore but they
- * still are called from arm.c
+/**
+ * kvm_vgic_early_init() - Initialize static VGIC VM data structures
+ * @kvm: The VM whose VGIC distributor should be initialized
+ *
+ * Only do initialization of static structures that don't require any
+ * allocation or sizing information from userspace. vgic_init() called
+ * kvm_vgic_dist_init() which takes care of the rest.
*/
void kvm_vgic_early_init(struct kvm *kvm)
{
+ struct vgic_dist *dist = &kvm->arch.vgic;
+
+ INIT_LIST_HEAD(&dist->lpi_list_head);
+ spin_lock_init(&dist->lpi_list_lock);
}
+/**
+ * kvm_vgic_vcpu_early_init() - Initialize static VGIC VCPU data structures
+ * @vcpu: The VCPU whose VGIC data structures should be initialized
+ *
+ * Only do initialization, but do not actually enable the VGIC CPU interface
+ * yet.
+ */
void kvm_vgic_vcpu_early_init(struct kvm_vcpu *vcpu)
{
+ struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+ int i;
+
+ INIT_LIST_HEAD(&vgic_cpu->ap_list_head);
+ spin_lock_init(&vgic_cpu->ap_list_lock);
+
+ /*
+ * Enable and configure all SGIs to be edge-triggered and
+ * configure all PPIs as level-triggered.
+ */
+ for (i = 0; i < VGIC_NR_PRIVATE_IRQS; i++) {
+ struct vgic_irq *irq = &vgic_cpu->private_irqs[i];
+
+ INIT_LIST_HEAD(&irq->ap_list);
+ spin_lock_init(&irq->irq_lock);
+ irq->intid = i;
+ irq->vcpu = NULL;
+ irq->target_vcpu = vcpu;
+ irq->targets = 1U << vcpu->vcpu_id;
+ kref_init(&irq->refcount);
+ if (vgic_irq_is_sgi(i)) {
+ /* SGIs */
+ irq->enabled = 1;
+ irq->config = VGIC_CONFIG_EDGE;
+ } else {
+ /* PPIs */
+ irq->config = VGIC_CONFIG_LEVEL;
+ }
+ }
}
/* CREATION */
@@ -148,9 +197,6 @@ static int kvm_vgic_dist_init(struct kvm *kvm, unsigned int nr_spis)
struct kvm_vcpu *vcpu0 = kvm_get_vcpu(kvm, 0);
int i;
- INIT_LIST_HEAD(&dist->lpi_list_head);
- spin_lock_init(&dist->lpi_list_lock);
-
dist->spis = kcalloc(nr_spis, sizeof(struct vgic_irq), GFP_KERNEL);
if (!dist->spis)
return -ENOMEM;
@@ -181,41 +227,11 @@ static int kvm_vgic_dist_init(struct kvm *kvm, unsigned int nr_spis)
}
/**
- * kvm_vgic_vcpu_init: initialize the vcpu data structures and
- * enable the VCPU interface
- * @vcpu: the VCPU which's VGIC should be initialized
+ * kvm_vgic_vcpu_init() - Enable the VCPU interface
+ * @vcpu: the VCPU whose VGIC should be enabled
*/
static void kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu)
{
- struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
- int i;
-
- INIT_LIST_HEAD(&vgic_cpu->ap_list_head);
- spin_lock_init(&vgic_cpu->ap_list_lock);
-
- /*
- * Enable and configure all SGIs to be edge-triggered and
- * configure all PPIs as level-triggered.
- */
- for (i = 0; i < VGIC_NR_PRIVATE_IRQS; i++) {
- struct vgic_irq *irq = &vgic_cpu->private_irqs[i];
-
- INIT_LIST_HEAD(&irq->ap_list);
- spin_lock_init(&irq->irq_lock);
- irq->intid = i;
- irq->vcpu = NULL;
- irq->target_vcpu = vcpu;
- irq->targets = 1U << vcpu->vcpu_id;
- kref_init(&irq->refcount);
- if (vgic_irq_is_sgi(i)) {
- /* SGIs */
- irq->enabled = 1;
- irq->config = VGIC_CONFIG_EDGE;
- } else {
- /* PPIs */
- irq->config = VGIC_CONFIG_LEVEL;
- }
- }
if (kvm_vgic_global_state.type == VGIC_V2)
vgic_v2_enable(vcpu);
else
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 25/79] KVM: arm/arm64: vgic: Don't check vgic_initialized in sync/flush
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (23 preceding siblings ...)
2017-04-23 17:08 ` [PULL 24/79] KVM: arm/arm64: vgic: Implement early VGIC init functionality Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 26/79] KVM: arm/arm64: vgic: Improve sync_hwstate performance Christoffer Dall
` (54 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
Now that we do an early init of the static parts of the VGIC data
structures, we can do things like checking if the AP lists are empty
directly, without having to explicitly check if the vgic is
initialized, reducing a bit of work in our critical path.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
virt/kvm/arm/vgic/vgic.c | 6 ------
1 file changed, 6 deletions(-)
diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index b64b143..04a405a 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -633,9 +633,6 @@ void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
{
struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
- if (unlikely(!vgic_initialized(vcpu->kvm)))
- return;
-
vgic_fold_lr_state(vcpu);
vgic_prune_ap_list(vcpu);
@@ -646,9 +643,6 @@ void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
/* Flush our emulation state into the GIC hardware before entering the guest. */
void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
{
- if (unlikely(!vgic_initialized(vcpu->kvm)))
- return;
-
/*
* If there are no virtual interrupts active or pending for this
* VCPU, then there is no work to do and we can bail out without
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 26/79] KVM: arm/arm64: vgic: Improve sync_hwstate performance
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (24 preceding siblings ...)
2017-04-23 17:08 ` [PULL 25/79] KVM: arm/arm64: vgic: Don't check vgic_initialized in sync/flush Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 27/79] arm64: KVM: PMU: Refactor pmu_*_el0_disabled Christoffer Dall
` (53 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
There is no need to call any functions to fold LRs when we don't use any
LRs and we don't need to mess with overflow flags, take spinlocks, or
prune the AP list if the AP list is empty.
Note: list_empty is a single atomic read (uses READ_ONCE) and can
therefore check if a list is empty or not without the need to take the
spinlock protecting the list.
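For illustration, here is a condensed C sketch of the resulting
sync-side fast path (it mirrors the vgic.c hunk below; nothing beyond
the patch is assumed):

void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
{
	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;

	/* An empty ap_list_head implies used_lrs == 0: nothing to do */
	if (list_empty(&vgic_cpu->ap_list_head))
		return;

	/* Only touch the LRs (and their locks) if we actually used some */
	if (vgic_cpu->used_lrs)
		vgic_fold_lr_state(vcpu);

	vgic_prune_ap_list(vcpu);
}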
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
virt/kvm/arm/vgic/vgic-v2.c | 7 +++++--
virt/kvm/arm/vgic/vgic-v3.c | 7 +++++--
virt/kvm/arm/vgic/vgic.c | 10 ++++++----
3 files changed, 16 insertions(+), 8 deletions(-)
diff --git a/virt/kvm/arm/vgic/vgic-v2.c b/virt/kvm/arm/vgic/vgic-v2.c
index b58b086..025b57d 100644
--- a/virt/kvm/arm/vgic/vgic-v2.c
+++ b/virt/kvm/arm/vgic/vgic-v2.c
@@ -44,12 +44,13 @@ static bool lr_signals_eoi_mi(u32 lr_val)
*/
void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
{
- struct vgic_v2_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v2;
+ struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+ struct vgic_v2_cpu_if *cpuif = &vgic_cpu->vgic_v2;
int lr;
cpuif->vgic_hcr &= ~GICH_HCR_UIE;
- for (lr = 0; lr < vcpu->arch.vgic_cpu.used_lrs; lr++) {
+ for (lr = 0; lr < vgic_cpu->used_lrs; lr++) {
u32 val = cpuif->vgic_lr[lr];
u32 intid = val & GICH_LR_VIRTUALID;
struct vgic_irq *irq;
@@ -91,6 +92,8 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
spin_unlock(&irq->irq_lock);
vgic_put_irq(vcpu->kvm, irq);
}
+
+ vgic_cpu->used_lrs = 0;
}
/*
diff --git a/virt/kvm/arm/vgic/vgic-v3.c b/virt/kvm/arm/vgic/vgic-v3.c
index 4f2dce6..bc7010d 100644
--- a/virt/kvm/arm/vgic/vgic-v3.c
+++ b/virt/kvm/arm/vgic/vgic-v3.c
@@ -36,13 +36,14 @@ static bool lr_signals_eoi_mi(u64 lr_val)
void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
{
- struct vgic_v3_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v3;
+ struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+ struct vgic_v3_cpu_if *cpuif = &vgic_cpu->vgic_v3;
u32 model = vcpu->kvm->arch.vgic.vgic_model;
int lr;
cpuif->vgic_hcr &= ~ICH_HCR_UIE;
- for (lr = 0; lr < vcpu->arch.vgic_cpu.used_lrs; lr++) {
+ for (lr = 0; lr < vgic_cpu->used_lrs; lr++) {
u64 val = cpuif->vgic_lr[lr];
u32 intid;
struct vgic_irq *irq;
@@ -92,6 +93,8 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
spin_unlock(&irq->irq_lock);
vgic_put_irq(vcpu->kvm, irq);
}
+
+ vgic_cpu->used_lrs = 0;
}
/* Requires the irq to be locked already */
diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 04a405a..3d0979c 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -633,11 +633,13 @@ void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
{
struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
- vgic_fold_lr_state(vcpu);
- vgic_prune_ap_list(vcpu);
+ /* An empty ap_list_head implies used_lrs == 0 */
+ if (list_empty(&vcpu->arch.vgic_cpu.ap_list_head))
+ return;
- /* Make sure we can fast-path in flush_hwstate */
- vgic_cpu->used_lrs = 0;
+ if (vgic_cpu->used_lrs)
+ vgic_fold_lr_state(vcpu);
+ vgic_prune_ap_list(vcpu);
}
/* Flush our emulation state into the GIC hardware before entering the guest. */
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 27/79] arm64: KVM: PMU: Refactor pmu_*_el0_disabled
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (25 preceding siblings ...)
2017-04-23 17:08 ` [PULL 26/79] KVM: arm/arm64: vgic: Improve sync_hwstate performance Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 28/79] arm64: KVM: PMU: Inject UNDEF exception on illegal register access Christoffer Dall
` (52 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
There is a lot of duplication in the pmu_*_el0_disabled helpers,
and as we're going to modify them shortly, let's move all the
common stuff into a single function.
No functional change.
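To make the intent concrete, here is a sketch of how such a helper is
typically consumed by an access handler (the handler name below is
made up; only pmu_access_el0_disabled()/check_pmu_access_disabled()
come from the patch):

/* Hypothetical access handler showing the helper's usage pattern */
static bool access_some_pmu_reg(struct kvm_vcpu *vcpu,
				struct sys_reg_params *p,
				const struct sys_reg_desc *r)
{
	/* Denied unless PMUSERENR_EL0 allows it or we trapped from EL1 */
	if (pmu_access_el0_disabled(vcpu))
		return false;

	/* ... emulate the access using p->regval ... */
	return true;
}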
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm64/kvm/sys_regs.c | 25 +++++++++++--------------
1 file changed, 11 insertions(+), 14 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 0e26f8c..036efc97 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -460,35 +460,32 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
vcpu_sys_reg(vcpu, PMCR_EL0) = val;
}
-static bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu)
+static bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags)
{
u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+ bool enabled = (reg & flags) || vcpu_mode_priv(vcpu);
- return !((reg & ARMV8_PMU_USERENR_EN) || vcpu_mode_priv(vcpu));
+ return !enabled;
}
-static bool pmu_write_swinc_el0_disabled(struct kvm_vcpu *vcpu)
+static bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu)
{
- u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+ return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_EN);
+}
- return !((reg & (ARMV8_PMU_USERENR_SW | ARMV8_PMU_USERENR_EN))
- || vcpu_mode_priv(vcpu));
+static bool pmu_write_swinc_el0_disabled(struct kvm_vcpu *vcpu)
+{
+ return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_SW | ARMV8_PMU_USERENR_EN);
}
static bool pmu_access_cycle_counter_el0_disabled(struct kvm_vcpu *vcpu)
{
- u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
-
- return !((reg & (ARMV8_PMU_USERENR_CR | ARMV8_PMU_USERENR_EN))
- || vcpu_mode_priv(vcpu));
+ return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_CR | ARMV8_PMU_USERENR_EN);
}
static bool pmu_access_event_counter_el0_disabled(struct kvm_vcpu *vcpu)
{
- u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
-
- return !((reg & (ARMV8_PMU_USERENR_ER | ARMV8_PMU_USERENR_EN))
- || vcpu_mode_priv(vcpu));
+ return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_ER | ARMV8_PMU_USERENR_EN);
}
static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 28/79] arm64: KVM: PMU: Inject UNDEF exception on illegal register access
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (26 preceding siblings ...)
2017-04-23 17:08 ` [PULL 27/79] arm64: KVM: PMU: Refactor pmu_*_el0_disabled Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 29/79] arm64: KVM: PMU: Inject UNDEF on non-privileged accesses Christoffer Dall
` (51 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Both pmu_*_el0_disabled() and pmu_counter_idx_valid() perform checks
on the validity of an access, but only return a boolean indicating
if the access is valid or not.
Let's allow these functions to also inject an UNDEF exception if
the access was illegal.
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/kvm/sys_regs.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 036efc97..750c129 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -465,6 +465,9 @@ static bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags)
u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
bool enabled = (reg & flags) || vcpu_mode_priv(vcpu);
+ if (!enabled)
+ kvm_inject_undefined(vcpu);
+
return !enabled;
}
@@ -564,8 +567,10 @@ static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
pmcr = vcpu_sys_reg(vcpu, PMCR_EL0);
val = (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
- if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX)
+ if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) {
+ kvm_inject_undefined(vcpu);
return false;
+ }
return true;
}
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 29/79] arm64: KVM: PMU: Inject UNDEF on non-privileged accesses
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (27 preceding siblings ...)
2017-04-23 17:08 ` [PULL 28/79] arm64: KVM: PMU: Inject UNDEF exception on illegal register access Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 30/79] arm64: KVM: Make unexpected reads from WO registers inject an undef Christoffer Dall
` (50 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
The registers handled by access_pminten() and access_pmuserenr() can
only be accessed when the CPU is in a privileged mode. If it is not,
let's inject an UNDEF exception.
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/kvm/sys_regs.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 750c129..d343c0f 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -709,8 +709,10 @@ static bool access_pminten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
if (!kvm_arm_pmu_v3_ready(vcpu))
return trap_raz_wi(vcpu, p, r);
- if (!vcpu_mode_priv(vcpu))
+ if (!vcpu_mode_priv(vcpu)) {
+ kvm_inject_undefined(vcpu);
return false;
+ }
if (p->is_write) {
u64 val = p->regval & mask;
@@ -780,8 +782,10 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
return trap_raz_wi(vcpu, p, r);
if (p->is_write) {
- if (!vcpu_mode_priv(vcpu))
+ if (!vcpu_mode_priv(vcpu)) {
+ kvm_inject_undefined(vcpu);
return false;
+ }
vcpu_sys_reg(vcpu, PMUSERENR_EL0) = p->regval
& ARMV8_PMU_USERENR_MASK;
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 30/79] arm64: KVM: Make unexpected reads from WO registers inject an undef
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (28 preceding siblings ...)
2017-04-23 17:08 ` [PULL 29/79] arm64: KVM: PMU: Inject UNDEF on non-privileged accesses Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 31/79] arm64: KVM: PMU: Inject UNDEF on read access to PMSWINC_EL0 Christoffer Dall
` (49 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Reads from write-only system registers are generally confined to
EL1 and not propagated to EL2 (that's what the architecture
mandates). In order to be sure that we have a sane behaviour
even in the unlikely event that we have a broken system, we still
handle it in KVM.
In that case, let's inject an undef into the guest.
Let's also remove write_to_read_only which isn't used anywhere.
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/kvm/sys_regs.c | 9 +++++++++
arch/arm64/kvm/sys_regs.h | 18 ------------------
2 files changed, 9 insertions(+), 18 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index d343c0f..20f90c0 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -55,6 +55,15 @@
* 64bit interface.
*/
+static bool read_from_write_only(struct kvm_vcpu *vcpu,
+ const struct sys_reg_params *params)
+{
+ WARN_ONCE(1, "Unexpected sys_reg read to write-only register\n");
+ print_sys_reg_instr(params);
+ kvm_inject_undefined(vcpu);
+ return false;
+}
+
/* 3 bits per cache level, as per CLIDR, but non-existent caches always 0 */
static u32 cache_levels;
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index 9c6ffd0..638f724 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -83,24 +83,6 @@ static inline bool read_zero(struct kvm_vcpu *vcpu,
return true;
}
-static inline bool write_to_read_only(struct kvm_vcpu *vcpu,
- const struct sys_reg_params *params)
-{
- kvm_debug("sys_reg write to read-only register at: %lx\n",
- *vcpu_pc(vcpu));
- print_sys_reg_instr(params);
- return false;
-}
-
-static inline bool read_from_write_only(struct kvm_vcpu *vcpu,
- const struct sys_reg_params *params)
-{
- kvm_debug("sys_reg read to write-only register at: %lx\n",
- *vcpu_pc(vcpu));
- print_sys_reg_instr(params);
- return false;
-}
-
/* Reset functions */
static inline void reset_unknown(struct kvm_vcpu *vcpu,
const struct sys_reg_desc *r)
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 31/79] arm64: KVM: PMU: Inject UNDEF on read access to PMSWINC_EL0
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (29 preceding siblings ...)
2017-04-23 17:08 ` [PULL 30/79] arm64: KVM: Make unexpected reads from WO registers inject an undef Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 32/79] arm64: KVM: Treat sysreg accessors returning false as successful Christoffer Dall
` (48 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
PMSWINC_EL0 is a WO register, so let's UNDEF when reading from it
(in the highly hypothetical case where this doesn't UNDEF at EL1).
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/kvm/sys_regs.c | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 20f90c0..3fef01d 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -772,16 +772,15 @@ static bool access_pmswinc(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
if (!kvm_arm_pmu_v3_ready(vcpu))
return trap_raz_wi(vcpu, p, r);
+ if (!p->is_write)
+ return read_from_write_only(vcpu, p);
+
if (pmu_write_swinc_el0_disabled(vcpu))
return false;
- if (p->is_write) {
- mask = kvm_pmu_valid_counter_mask(vcpu);
- kvm_pmu_software_increment(vcpu, p->regval & mask);
- return true;
- }
-
- return false;
+ mask = kvm_pmu_valid_counter_mask(vcpu);
+ kvm_pmu_software_increment(vcpu, p->regval & mask);
+ return true;
}
static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 32/79] arm64: KVM: Treat sysreg accessors returning false as successful
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (30 preceding siblings ...)
2017-04-23 17:08 ` [PULL 31/79] arm64: KVM: PMU: Inject UNDEF on read access to PMSWINC_EL0 Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 33/79] arm64: KVM: Do not corrupt registers on failed 64bit CP read Christoffer Dall
` (47 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Instead of considering that a sysreg accessor has failed when
returning false, let's consider that it is *always* successful
(after all, we won't stand for an incomplete emulation).
The return value now simply indicates whether we should skip
the instruction (because it has now been emulated), or if we
should leave the PC alone if the emulation has injected an
exception.
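A minimal sketch of the new contract, for reference (the handler and
its predicate are hypothetical; kvm_inject_undefined() is real):

static bool access_example(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
			   const struct sys_reg_desc *r)
{
	if (!example_access_allowed(vcpu)) {	/* hypothetical check */
		kvm_inject_undefined(vcpu);
		return false;	/* handled, but do not skip the instruction */
	}

	/* ... emulate the access ... */
	return true;		/* handled, skip the trapped instruction */
}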
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/kvm/sys_regs.c | 49 +++++++++++++++++++----------------------------
1 file changed, 20 insertions(+), 29 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 3fef01d..2f4418e 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1571,6 +1571,22 @@ int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu, struct kvm_run *run)
return 1;
}
+static void perform_access(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *params,
+ const struct sys_reg_desc *r)
+{
+ /*
+ * Not having an accessor means that we have configured a trap
+ * that we don't know how to handle. This certainly qualifies
+ * as a gross bug that should be fixed right away.
+ */
+ BUG_ON(!r->access);
+
+ /* Skip instruction if instructed so */
+ if (likely(r->access(vcpu, params, r)))
+ kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
+}
+
/*
* emulate_cp -- tries to match a sys_reg access in a handling table, and
* call the corresponding trap handler.
@@ -1594,20 +1610,8 @@ static int emulate_cp(struct kvm_vcpu *vcpu,
r = find_reg(params, table, num);
if (r) {
- /*
- * Not having an accessor means that we have
- * configured a trap that we don't know how to
- * handle. This certainly qualifies as a gross bug
- * that should be fixed right away.
- */
- BUG_ON(!r->access);
-
- if (likely(r->access(vcpu, params, r))) {
- /* Skip instruction, since it was emulated */
- kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
- /* Handled */
- return 0;
- }
+ perform_access(vcpu, params, r);
+ return 0;
}
/* Not handled */
@@ -1777,26 +1781,13 @@ static int emulate_sys_reg(struct kvm_vcpu *vcpu,
r = find_reg(params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
if (likely(r)) {
- /*
- * Not having an accessor means that we have
- * configured a trap that we don't know how to
- * handle. This certainly qualifies as a gross bug
- * that should be fixed right away.
- */
- BUG_ON(!r->access);
-
- if (likely(r->access(vcpu, params, r))) {
- /* Skip instruction, since it was emulated */
- kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
- return 1;
- }
- /* If access function fails, it should complain. */
+ perform_access(vcpu, params, r);
} else {
kvm_err("Unsupported guest sys_reg access at: %lx\n",
*vcpu_pc(vcpu));
print_sys_reg_instr(params);
+ kvm_inject_undefined(vcpu);
}
- kvm_inject_undefined(vcpu);
return 1;
}
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 33/79] arm64: KVM: Do not corrupt registers on failed 64bit CP read
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (31 preceding siblings ...)
2017-04-23 17:08 ` [PULL 32/79] arm64: KVM: Treat sysreg accessors returning false as successful Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 34/79] arm: KVM: Make unexpected register accesses inject an undef Christoffer Dall
` (46 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
If we fail to emulate a mrrc instruction, we:
1) deliver an exception,
2) spit a nastygram on the console,
3) write back some garbage to Rt/Rt2
While 1) and 2) are perfectly acceptable, 3) is out of the scope of
the architecture... Let's mimic the code in kvm_handle_cp_32 and
be more cautious.
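As a worked example of the read-side behaviour preserved on the
success path (the helper name and value below are made up for
illustration; vcpu_set_reg(), lower_32_bits() and upper_32_bits() are
real):

/* Illustration only: splitting an emulated 64-bit CP read into Rt/Rt2 */
static void split_cp64_read_example(struct kvm_vcpu *vcpu, int Rt, int Rt2)
{
	u64 regval = 0x1122334455667788ULL;		/* example value */

	vcpu_set_reg(vcpu, Rt,  lower_32_bits(regval));	/* Rt  = 0x55667788 */
	vcpu_set_reg(vcpu, Rt2, upper_32_bits(regval));	/* Rt2 = 0x11223344 */
}

On the failure path, this split is now skipped entirely, so the
guest's Rt/Rt2 keep their values and only the UNDEF is delivered.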
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm64/kvm/sys_regs.c | 27 ++++++++++++++++-----------
1 file changed, 16 insertions(+), 11 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 2f4418e..582d68e 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1678,20 +1678,25 @@ static int kvm_handle_cp_64(struct kvm_vcpu *vcpu,
params.regval |= vcpu_get_reg(vcpu, Rt2) << 32;
}
- if (!emulate_cp(vcpu, &params, target_specific, nr_specific))
- goto out;
- if (!emulate_cp(vcpu, &params, global, nr_global))
- goto out;
-
- unhandled_cp_access(vcpu, &params);
+ /*
+ * Try to emulate the coprocessor access using the target
+ * specific table first, and using the global table afterwards.
+ * If either of the tables contains a handler, handle the
+ * potential register operation in the case of a read and return
+ * with success.
+ */
+ if (!emulate_cp(vcpu, &params, target_specific, nr_specific) ||
+ !emulate_cp(vcpu, &params, global, nr_global)) {
+ /* Split up the value between registers for the read side */
+ if (!params.is_write) {
+ vcpu_set_reg(vcpu, Rt, lower_32_bits(params.regval));
+ vcpu_set_reg(vcpu, Rt2, upper_32_bits(params.regval));
+ }
-out:
- /* Split up the value between registers for the read side */
- if (!params.is_write) {
- vcpu_set_reg(vcpu, Rt, lower_32_bits(params.regval));
- vcpu_set_reg(vcpu, Rt2, upper_32_bits(params.regval));
+ return 1;
}
+ unhandled_cp_access(vcpu, &params);
return 1;
}
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 34/79] arm: KVM: Make unexpected register accesses inject an undef
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (32 preceding siblings ...)
2017-04-23 17:08 ` [PULL 33/79] arm64: KVM: Do not corrupt registers on failed 64bit CP read Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 35/79] arm: KVM: Treat CP15 accessors returning false as successful Christoffer Dall
` (45 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Reads from write-only system registers are generally confined to
EL1 and not propagated to EL2 (that's what the architecture
mandates). In order to be sure that we have a sane behaviour
even in the unlikely event that we have a broken system, we still
handle it in KVM. The same goes for writes to RO registers.
In that case, let's inject an undef into the guest.
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/coproc.c | 18 ++++++++++++++++++
arch/arm/kvm/coproc.h | 18 ------------------
2 files changed, 18 insertions(+), 18 deletions(-)
diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
index 3e5e419..519aac1 100644
--- a/arch/arm/kvm/coproc.c
+++ b/arch/arm/kvm/coproc.c
@@ -40,6 +40,24 @@
* Co-processor emulation
*****************************************************************************/
+static bool write_to_read_only(struct kvm_vcpu *vcpu,
+ const struct coproc_params *params)
+{
+ WARN_ONCE(1, "CP15 write to read-only register\n");
+ print_cp_instr(params);
+ kvm_inject_undefined(vcpu);
+ return false;
+}
+
+static bool read_from_write_only(struct kvm_vcpu *vcpu,
+ const struct coproc_params *params)
+{
+ WARN_ONCE(1, "CP15 read to write-only register\n");
+ print_cp_instr(params);
+ kvm_inject_undefined(vcpu);
+ return false;
+}
+
/* 3 bits per cache level, as per CLIDR, but non-existent caches always 0 */
static u32 cache_levels;
diff --git a/arch/arm/kvm/coproc.h b/arch/arm/kvm/coproc.h
index eef1759..3a41b7d 100644
--- a/arch/arm/kvm/coproc.h
+++ b/arch/arm/kvm/coproc.h
@@ -81,24 +81,6 @@ static inline bool read_zero(struct kvm_vcpu *vcpu,
return true;
}
-static inline bool write_to_read_only(struct kvm_vcpu *vcpu,
- const struct coproc_params *params)
-{
- kvm_debug("CP15 write to read-only register at: %08lx\n",
- *vcpu_pc(vcpu));
- print_cp_instr(params);
- return false;
-}
-
-static inline bool read_from_write_only(struct kvm_vcpu *vcpu,
- const struct coproc_params *params)
-{
- kvm_debug("CP15 read to write-only register at: %08lx\n",
- *vcpu_pc(vcpu));
- print_cp_instr(params);
- return false;
-}
-
/* Reset functions */
static inline void reset_unknown(struct kvm_vcpu *vcpu,
const struct coproc_reg *r)
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 35/79] arm: KVM: Treat CP15 accessors returning false as successful
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (33 preceding siblings ...)
2017-04-23 17:08 ` [PULL 34/79] arm: KVM: Make unexpected register accesses inject an undef Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 36/79] arm64: hyp-stub: Stop pointlessly clobbering lr Christoffer Dall
` (44 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Instead of considering that a CP15 accessor has failed when
returning false, let's consider that it is *always* successful
(after all, we won't stand for an incomplete emulation).
The return value now simply indicates whether we should skip
the instruction (because it has now been emulated), or if we
should leave the PC alone if the emulation has injected an
exception.
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/coproc.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
index 519aac1..2c14b69 100644
--- a/arch/arm/kvm/coproc.c
+++ b/arch/arm/kvm/coproc.c
@@ -520,15 +520,15 @@ static int emulate_cp15(struct kvm_vcpu *vcpu,
if (likely(r->access(vcpu, params, r))) {
/* Skip instruction, since it was emulated */
kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
- return 1;
}
- /* If access function fails, it should complain. */
} else {
+ /* If access function fails, it should complain. */
kvm_err("Unsupported guest CP15 access at: %08lx\n",
*vcpu_pc(vcpu));
print_cp_instr(params);
+ kvm_inject_undefined(vcpu);
}
- kvm_inject_undefined(vcpu);
+
return 1;
}
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 36/79] arm64: hyp-stub: Stop pointlessly clobbering lr
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (34 preceding siblings ...)
2017-04-23 17:08 ` [PULL 35/79] arm: KVM: Treat CP15 accessors returning false as successful Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 37/79] arm64: KVM: Move lr save/restore to do_el2_call Christoffer Dall
` (43 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
When entering the kernel hyp stub, we check whether or not we've
made it here through an HVC instruction, clobbering lr (aka x30)
in the process.
This is completely pointless, as HVC is the only way to get here
(all traps to EL2 are disabled, no interrupt override is applied).
So let's remove this bit of code whose only point is to corrupt
a valuable register.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm64/kernel/hyp-stub.S | 6 ------
1 file changed, 6 deletions(-)
diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
index d3b5f75..e4215ad 100644
--- a/arch/arm64/kernel/hyp-stub.S
+++ b/arch/arm64/kernel/hyp-stub.S
@@ -55,12 +55,6 @@ ENDPROC(__hyp_stub_vectors)
.align 11
el1_sync:
- mrs x30, esr_el2
- lsr x30, x30, #ESR_ELx_EC_SHIFT
-
- cmp x30, #ESR_ELx_EC_HVC64
- b.ne 9f // Not an HVC trap
-
cmp x0, #HVC_GET_VECTORS
b.ne 1f
mrs x0, vbar_el2
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 37/79] arm64: KVM: Move lr save/restore to do_el2_call
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (35 preceding siblings ...)
2017-04-23 17:08 ` [PULL 36/79] arm64: hyp-stub: Stop pointlessly clobbering lr Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 38/79] arm64: hyp-stub: Don't save lr in the EL1 code Christoffer Dall
` (42 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
At the moment, we only save/restore lr if on VHE, as we rely on the
EL1 code to have preserved it in the non-VHE case.
As we're about to get rid of the latter, let's move the save/restore
code to the do_el2_call macro, unifying both code paths.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm64/kvm/hyp.S | 3 ---
arch/arm64/kvm/hyp/hyp-entry.S | 4 ++--
2 files changed, 2 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index 2726635..f6f20b5 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -38,13 +38,10 @@
* A function pointer with a value less than 0xfff has a special meaning,
* and is used to implement __hyp_get_vectors in the same way as in
* arch/arm64/kernel/hyp_stub.S.
- * HVC behaves as a 'bl' call and will clobber lr.
*/
ENTRY(__kvm_call_hyp)
alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
- str lr, [sp, #-16]!
hvc #0
- ldr lr, [sp], #16
ret
alternative_else_nop_endif
b __vhe_hyp_call
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 5e9052f..d8ef788 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -32,17 +32,17 @@
* Shuffle the parameters before calling the function
* pointed to in x0. Assumes parameters in x[1,2,3].
*/
+ str lr, [sp, #-16]!
mov lr, x0
mov x0, x1
mov x1, x2
mov x2, x3
blr lr
+ ldr lr, [sp], #16
.endm
ENTRY(__vhe_hyp_call)
- str lr, [sp, #-16]!
do_el2_call
- ldr lr, [sp], #16
/*
* We used to rely on having an exception return to get
* an implicit isb. In the E2H case, we don't have it anymore.
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 38/79] arm64: hyp-stub: Don't save lr in the EL1 code
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (36 preceding siblings ...)
2017-04-23 17:08 ` [PULL 37/79] arm64: KVM: Move lr save/restore to do_el2_call Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 39/79] arm64: hyp-stub: Define a return value for failed stub calls Christoffer Dall
` (41 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
The EL2 code is not corrupting lr anymore, so don't bother preserving
it in the EL1 trampoline code.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm64/kernel/hyp-stub.S | 4 ----
1 file changed, 4 deletions(-)
diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
index e4215ad..193dfb2 100644
--- a/arch/arm64/kernel/hyp-stub.S
+++ b/arch/arm64/kernel/hyp-stub.S
@@ -116,18 +116,14 @@ ENDPROC(\label)
*/
ENTRY(__hyp_get_vectors)
- str lr, [sp, #-16]!
mov x0, #HVC_GET_VECTORS
hvc #0
- ldr lr, [sp], #16
ret
ENDPROC(__hyp_get_vectors)
ENTRY(__hyp_set_vectors)
- str lr, [sp, #-16]!
mov x1, x0
mov x0, #HVC_SET_VECTORS
hvc #0
- ldr lr, [sp], #16
ret
ENDPROC(__hyp_set_vectors)
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 39/79] arm64: hyp-stub: Define a return value for failed stub calls
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (37 preceding siblings ...)
2017-04-23 17:08 ` [PULL 38/79] arm64: hyp-stub: Don't save lr in the EL1 code Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 40/79] arm64: hyp-stub: Update documentation in asm/virt.h Christoffer Dall
` (40 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Define a standard return value to be returned when a hyp stub
call fails, and make KVM use it for ARM_EXCEPTION_HYP_GONE
(instead of using a KVM-specific value).
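As a hedged illustration of where this matters (the wrapper function
below is made up and the call site is simplified; the exact exit
handling is not part of this patch):

/* Sketch only: detecting a torn-down hypervisor after a hyp call */
static void check_hyp_gone_example(struct kvm_vcpu *vcpu)
{
	u64 ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);

	if (ret == ARM_EXCEPTION_HYP_GONE)	/* i.e. HVC_STUB_ERR, 0xbadca11 */
		kvm_err("HYP has been reset to the stub, cannot enter the guest\n");
}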
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm64/include/asm/kvm_asm.h | 2 +-
arch/arm64/include/asm/virt.h | 3 +++
arch/arm64/kernel/hyp-stub.S | 2 +-
3 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 49f99cd..b7e4ef5 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -28,7 +28,7 @@
#define ARM_EXCEPTION_EL1_SERROR 1
#define ARM_EXCEPTION_TRAP 2
/* The hyp-stub will return this for any kvm_call_hyp() call */
-#define ARM_EXCEPTION_HYP_GONE 3
+#define ARM_EXCEPTION_HYP_GONE HVC_STUB_ERR
#define KVM_ARM64_DEBUG_DIRTY_SHIFT 0
#define KVM_ARM64_DEBUG_DIRTY (1 << KVM_ARM64_DEBUG_DIRTY_SHIFT)
diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
index 439f6b5..1466d14 100644
--- a/arch/arm64/include/asm/virt.h
+++ b/arch/arm64/include/asm/virt.h
@@ -39,6 +39,9 @@
*/
#define HVC_SOFT_RESTART 2
+/* Error returned when an invalid stub number is passed into x0 */
+#define HVC_STUB_ERR 0xbadca11
+
#define BOOT_CPU_MODE_EL1 (0xe11)
#define BOOT_CPU_MODE_EL2 (0xe12)
diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
index 193dfb2..f53e8b8 100644
--- a/arch/arm64/kernel/hyp-stub.S
+++ b/arch/arm64/kernel/hyp-stub.S
@@ -74,7 +74,7 @@ el1_sync:
br x4 // no return
/* Someone called kvm_call_hyp() against the hyp-stub... */
-3: mov x0, #ARM_EXCEPTION_HYP_GONE
+3: ldr x0, =HVC_STUB_ERR
9: eret
ENDPROC(el1_sync)
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 40/79] arm64: hyp-stub: Update documentation in asm/virt.h
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (38 preceding siblings ...)
2017-04-23 17:08 ` [PULL 39/79] arm64: hyp-stub: Define a return value for failed stub calls Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 41/79] arm64: hyp-stub: Implement HVC_RESET_VECTORS stub hypercall Christoffer Dall
` (39 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Comments in asm/virt.h are slightly out of date, so let's align
them with the new behaviour of the code.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm64/include/asm/virt.h | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
index 1466d14..1569c3a 100644
--- a/arch/arm64/include/asm/virt.h
+++ b/arch/arm64/include/asm/virt.h
@@ -19,9 +19,14 @@
#define __ASM__VIRT_H
/*
- * The arm64 hcall implementation uses x0 to specify the hcall type. A value
- * less than 0xfff indicates a special hcall, such as get/set vector.
- * Any other value is used as a pointer to the function to call.
+ * The arm64 hcall implementation uses x0 to specify the hcall
+ * number. A value less than HVC_STUB_HCALL_NR indicates a special
+ * hcall, such as set vector. Any other value is handled in a
+ * hypervisor specific way.
+ *
+ * The hypercall is allowed to clobber any of the caller-saved
+ * registers (x0-x18), so it is advisable to use it through the
+ * indirection of a function call (as implemented in hyp-stub.S).
*/
/* HVC_GET_VECTORS - Return the value of the vbar_el2 register. */
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 41/79] arm64: hyp-stub: Implement HVC_RESET_VECTORS stub hypercall
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (39 preceding siblings ...)
2017-04-23 17:08 ` [PULL 40/79] arm64: hyp-stub: Update documentation in asm/virt.h Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 42/79] arm64: KVM: Implement HVC_RESET_VECTORS stub hypercall in the init code Christoffer Dall
` (38 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Let's define a new stub hypercall that resets the HYP configuration
to its default: hyp-stub vectors, and MMU disabled.
Of course, for the hyp-stub itself, this is a trivial no-op.
Hypervisors will have a bit more work to do.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm64/include/asm/virt.h | 9 +++++++++
arch/arm64/kernel/hyp-stub.S | 11 ++++++++++-
2 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
index 1569c3a..435514c 100644
--- a/arch/arm64/include/asm/virt.h
+++ b/arch/arm64/include/asm/virt.h
@@ -44,6 +44,14 @@
*/
#define HVC_SOFT_RESTART 2
+/*
+ * HVC_RESET_VECTORS - Restore the vectors to the original HYP stubs
+ */
+#define HVC_RESET_VECTORS 3
+
+/* Max number of HYP stub hypercalls */
+#define HVC_STUB_HCALL_NR 4
+
/* Error returned when an invalid stub number is passed into x0 */
#define HVC_STUB_ERR 0xbadca11
@@ -70,6 +78,7 @@ extern u32 __boot_cpu_mode[2];
void __hyp_set_vectors(phys_addr_t phys_vector_base);
phys_addr_t __hyp_get_vectors(void);
+void __hyp_reset_vectors(void);
/* Reports the availability of HYP mode */
static inline bool is_hyp_mode_available(void)
diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
index f53e8b8..8226fd9 100644
--- a/arch/arm64/kernel/hyp-stub.S
+++ b/arch/arm64/kernel/hyp-stub.S
@@ -73,8 +73,11 @@ el1_sync:
mov x1, x3
br x4 // no return
+3: cmp x0, #HVC_RESET_VECTORS
+ beq 9f // Nothing to reset!
+
/* Someone called kvm_call_hyp() against the hyp-stub... */
-3: ldr x0, =HVC_STUB_ERR
+ ldr x0, =HVC_STUB_ERR
9: eret
ENDPROC(el1_sync)
@@ -127,3 +130,9 @@ ENTRY(__hyp_set_vectors)
hvc #0
ret
ENDPROC(__hyp_set_vectors)
+
+ENTRY(__hyp_reset_vectors)
+ mov x0, #HVC_RESET_VECTORS
+ hvc #0
+ ret
+ENDPROC(__hyp_reset_vectors)
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 42/79] arm64: KVM: Implement HVC_RESET_VECTORS stub hypercall in the init code
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (40 preceding siblings ...)
2017-04-23 17:08 ` [PULL 41/79] arm64: hyp-stub: Implement HVC_RESET_VECTORS stub hypercall Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 43/79] arm64: KVM: Implement HVC_GET_VECTORS " Christoffer Dall
` (37 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
In order to restore HYP mode to its original condition, KVM currently
implements __kvm_hyp_reset(). As we're moving towards a hyp-stub
defined API, it becomes necessary to implement HVC_RESET_VECTORS.
This patch adds the HVC_RESET_VECTORS hypercall to the KVM init
code, which so far lacked any form of hypercall support.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm64/kvm/hyp-init.S | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
index 6b29d3d..5e39ad5 100644
--- a/arch/arm64/kvm/hyp-init.S
+++ b/arch/arm64/kvm/hyp-init.S
@@ -22,6 +22,7 @@
#include <asm/kvm_mmu.h>
#include <asm/pgtable-hwdef.h>
#include <asm/sysreg.h>
+#include <asm/virt.h>
.text
.pushsection .hyp.idmap.text, "ax"
@@ -58,6 +59,9 @@ __invalid:
* x2: HYP vectors
*/
__do_hyp_init:
+ /* Check for a stub HVC call */
+ cmp x0, #HVC_STUB_HCALL_NR
+ b.lo __kvm_handle_stub_hvc
msr ttbr0_el2, x0
@@ -119,6 +123,9 @@ __do_hyp_init:
eret
ENDPROC(__kvm_hyp_init)
+ENTRY(__kvm_handle_stub_hvc)
+ cmp x0, #HVC_RESET_VECTORS
+ b.ne 1f
/*
* Reset kvm back to the hyp stub.
*/
@@ -133,9 +140,15 @@ ENTRY(__kvm_hyp_reset)
/* Install stub vectors */
adr_l x0, __hyp_stub_vectors
msr vbar_el2, x0
+ b exit
+1: /* Bad stub call */
+ ldr x0, =HVC_STUB_ERR
+
+exit:
eret
ENDPROC(__kvm_hyp_reset)
+ENDPROC(__kvm_handle_stub_hvc)
.ltorg
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 43/79] arm64: KVM: Implement HVC_GET_VECTORS in the init code
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (41 preceding siblings ...)
2017-04-23 17:08 ` [PULL 42/79] arm64: KVM: Implement HVC_RESET_VECTORS stub hypercall in the init code Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 44/79] arm64: KVM: Allow the main HYP code to use the init hyp stub implementation Christoffer Dall
` (36 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Now that we have an infrastructure to handle hypercalls in the KVM
init code, let's implement HVC_GET_VECTORS there.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm64/kvm/hyp-init.S | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
index 5e39ad5..fded932 100644
--- a/arch/arm64/kvm/hyp-init.S
+++ b/arch/arm64/kvm/hyp-init.S
@@ -124,7 +124,12 @@ __do_hyp_init:
ENDPROC(__kvm_hyp_init)
ENTRY(__kvm_handle_stub_hvc)
- cmp x0, #HVC_RESET_VECTORS
+ cmp x0, #HVC_GET_VECTORS
+ b.ne 1f
+ mrs x0, vbar_el2
+ b exit
+
+1: cmp x0, #HVC_RESET_VECTORS
b.ne 1f
/*
* Reset kvm back to the hyp stub.
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 44/79] arm64: KVM: Allow the main HYP code to use the init hyp stub implementation
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (42 preceding siblings ...)
2017-04-23 17:08 ` [PULL 43/79] arm64: KVM: Implement HVC_GET_VECTORS " Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 45/79] arm64: KVM: Convert __cpu_reset_hyp_mode to using __hyp_reset_vectors Christoffer Dall
` (35 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
We now have a full hyp-stub implementation in the KVM init code,
but the main KVM code only supports HVC_GET_VECTORS, which is not
enough.
Instead of reinventing the wheel, let's reuse the init implementation
by branching to the idmap page when called with a hyp-stub hypercall.
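The address computation done in the assembly below boils down to the
following C (a sketch; the extern declaration and helper name are only
for illustration, kimage_voffset is the real arm64 variable):

extern char __kvm_handle_stub_hvc[];	/* entry point from hyp-init.S */

/* Sketch: kimage_voffset == kernel VA - PA, so subtracting it gives __pa() */
static phys_addr_t stub_hvc_idmap_addr(void)
{
	return (unsigned long)__kvm_handle_stub_hvc - kimage_voffset;
}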
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm64/kvm/hyp/hyp-entry.S | 24 +++++++++++++++++++-----
1 file changed, 19 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index d8ef788..4f34c59 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -87,10 +87,24 @@ alternative_endif
/* Here, we're pretty sure the host called HVC. */
ldp x0, x1, [sp], #16
- cmp x0, #HVC_GET_VECTORS
- b.ne 1f
- mrs x0, vbar_el2
- b 2f
+ /* Check for a stub HVC call */
+ cmp x0, #HVC_STUB_HCALL_NR
+ b.hs 1f
+
+ /*
+ * Compute the idmap address of __kvm_handle_stub_hvc and
+ * jump there. Since we use kimage_voffset, do not use the
+ * HYP VA for __kvm_handle_stub_hvc, but the kernel VA instead
+ * (by loading it from the constant pool).
+ *
+ * Preserve x0-x4, which may contain stub parameters.
+ */
+ ldr x5, =__kvm_handle_stub_hvc
+ ldr_l x6, kimage_voffset
+
+ /* x5 = __pa(x5) */
+ sub x5, x5, x6
+ br x5
1:
/*
@@ -99,7 +113,7 @@ alternative_endif
kern_hyp_va x0
do_el2_call
-2: eret
+ eret
el1_trap:
/*
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 45/79] arm64: KVM: Convert __cpu_reset_hyp_mode to using __hyp_reset_vectors
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (43 preceding siblings ...)
2017-04-23 17:08 ` [PULL 44/79] arm64: KVM: Allow the main HYP code to use the init hyp stub implementation Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 46/79] arm64: KVM: Implement HVC_SOFT_RESTART in the init code Christoffer Dall
` (34 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
We are now able to use the hyp stub to reset HYP mode. Time to
kiss __kvm_hyp_reset goodbye, and use __hyp_reset_vectors.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm64/include/asm/kvm_asm.h | 1 -
arch/arm64/include/asm/kvm_host.h | 3 +--
arch/arm64/kvm/hyp-init.S | 2 --
arch/arm64/kvm/hyp/hyp-entry.S | 15 ---------------
4 files changed, 1 insertion(+), 20 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index b7e4ef5..26a64d0 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -47,7 +47,6 @@ struct kvm_vcpu;
extern char __kvm_hyp_init[];
extern char __kvm_hyp_init_end[];
-extern char __kvm_hyp_reset[];
extern char __kvm_hyp_vector[];
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e7705e7..0355dd1 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -362,11 +362,10 @@ static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
__kvm_call_hyp((void *)pgd_ptr, hyp_stack_ptr, vector_ptr);
}
-void __kvm_hyp_teardown(void);
static inline void __cpu_reset_hyp_mode(unsigned long vector_ptr,
phys_addr_t phys_idmap_start)
{
- kvm_call_hyp(__kvm_hyp_teardown, phys_idmap_start);
+ __hyp_reset_vectors();
}
static inline void kvm_arch_hardware_unsetup(void) {}
diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
index fded932..b7a8f12 100644
--- a/arch/arm64/kvm/hyp-init.S
+++ b/arch/arm64/kvm/hyp-init.S
@@ -134,7 +134,6 @@ ENTRY(__kvm_handle_stub_hvc)
/*
* Reset kvm back to the hyp stub.
*/
-ENTRY(__kvm_hyp_reset)
/* We're now in idmap, disable MMU */
mrs x0, sctlr_el2
ldr x1, =SCTLR_ELx_FLAGS
@@ -152,7 +151,6 @@ ENTRY(__kvm_hyp_reset)
exit:
eret
-ENDPROC(__kvm_hyp_reset)
ENDPROC(__kvm_handle_stub_hvc)
.ltorg
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 4f34c59..5170ce1 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -53,21 +53,6 @@ ENTRY(__vhe_hyp_call)
ret
ENDPROC(__vhe_hyp_call)
-/*
- * Compute the idmap address of __kvm_hyp_reset based on the idmap
- * start passed as a parameter, and jump there.
- *
- * x0: HYP phys_idmap_start
- */
-ENTRY(__kvm_hyp_teardown)
- mov x4, x0
- adr_l x3, __kvm_hyp_reset
-
- /* insert __kvm_hyp_reset()s offset into phys_idmap_start */
- bfi x4, x3, #0, #PAGE_SHIFT
- br x4
-ENDPROC(__kvm_hyp_teardown)
-
el1_sync: // Guest trapped into EL2
stp x0, x1, [sp, #-16]!
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 46/79] arm64: KVM: Implement HVC_SOFT_RESTART in the init code
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (44 preceding siblings ...)
2017-04-23 17:08 ` [PULL 45/79] arm64: KVM: Convert __cpu_reset_hyp_mode to using __hyp_reset_vectors Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 47/79] ARM: hyp-stub: improve ABI Christoffer Dall
` (33 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Another missing stub hypercall is HVC_SOFT_RESTART. It turns out
that it is pretty easy to implement in terms of HVC_RESET_VECTORS
(since it needs to turn the MMU off).
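In rough C terms, the dispatch added below behaves like the following
sketch (the sketch_* helpers stand in for the msr/mrs sequences and
the whole function is illustrative, not real code; HVC_GET_VECTORS is
handled before this point and omitted here):

static void kvm_stub_hvc_sketch(unsigned long x0, unsigned long x1,
				unsigned long x2, unsigned long x3,
				unsigned long x4)
{
	bool soft_restart = (x0 == HVC_SOFT_RESTART);

	if (soft_restart) {
		/* Return to x1, staying at EL2h with all exceptions masked */
		sketch_write_elr_el2(x1);
		sketch_write_spsr_el2(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT |
				      PSR_D_BIT | PSR_MODE_EL2h);
		/* The restart target receives the remaining args in x0-x2 */
		x0 = x2;
		x1 = x3;
		x2 = x4;
	}

	if (soft_restart || x0 == HVC_RESET_VECTORS) {
		sketch_disable_el2_mmu();	/* clear SCTLR_ELx_FLAGS */
		sketch_write_vbar_el2(__pa_symbol(__hyp_stub_vectors));
	} else {
		x0 = HVC_STUB_ERR;		/* unknown stub call */
	}
	/* eret: continues at elr_el2, i.e. the restart target for SOFT_RESTART */
}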
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm64/kvm/hyp-init.S | 31 +++++++++++++++++++++++--------
1 file changed, 23 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
index b7a8f12..0ad34fd 100644
--- a/arch/arm64/kvm/hyp-init.S
+++ b/arch/arm64/kvm/hyp-init.S
@@ -129,21 +129,36 @@ ENTRY(__kvm_handle_stub_hvc)
mrs x0, vbar_el2
b exit
+1: cmp x0, #HVC_SOFT_RESTART
+ b.ne 1f
+
+ /* This is where we're about to jump, staying at EL2 */
+ msr elr_el2, x1
+ mov x0, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT | PSR_MODE_EL2h)
+ msr spsr_el2, x0
+
+ /* Shuffle the arguments, and don't come back */
+ mov x0, x2
+ mov x1, x3
+ mov x2, x4
+ b reset
+
1: cmp x0, #HVC_RESET_VECTORS
b.ne 1f
+reset:
/*
- * Reset kvm back to the hyp stub.
+ * Reset kvm back to the hyp stub. Do not clobber x0-x4 in
+ * case we are coming via HVC_SOFT_RESTART.
*/
- /* We're now in idmap, disable MMU */
- mrs x0, sctlr_el2
- ldr x1, =SCTLR_ELx_FLAGS
- bic x0, x0, x1 // Clear SCTL_M and etc
- msr sctlr_el2, x0
+ mrs x5, sctlr_el2
+ ldr x6, =SCTLR_ELx_FLAGS
+ bic x5, x5, x6 // Clear SCTL_M and etc
+ msr sctlr_el2, x5
isb
/* Install stub vectors */
- adr_l x0, __hyp_stub_vectors
- msr vbar_el2, x0
+ adr_l x5, __hyp_stub_vectors
+ msr vbar_el2, x5
b exit
1: /* Bad stub call */
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 47/79] ARM: hyp-stub: improve ABI
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (45 preceding siblings ...)
2017-04-23 17:08 ` [PULL 46/79] arm64: KVM: Implement HVC_SOFT_RESTART in the init code Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 48/79] ARM: soft-reboot into same mode that we entered the kernel Christoffer Dall
` (32 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Russell King <rmk+kernel@armlinux.org.uk>
Improve the hyp-stub ABI to allow it to do more than just get/set the
vectors. We follow the example in ARM64, where r0 is used as an opcode
with the other registers as arguments.
Tested-by: Keerthy <j-keerthy@ti.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm/kernel/hyp-stub.S | 27 ++++++++++++++++++++++-----
1 file changed, 22 insertions(+), 5 deletions(-)
diff --git a/arch/arm/kernel/hyp-stub.S b/arch/arm/kernel/hyp-stub.S
index 15d073a..f3e9ba5 100644
--- a/arch/arm/kernel/hyp-stub.S
+++ b/arch/arm/kernel/hyp-stub.S
@@ -22,6 +22,9 @@
#include <asm/assembler.h>
#include <asm/virt.h>
+#define HVC_GET_VECTORS 0
+#define HVC_SET_VECTORS 1
+
#ifndef ZIMAGE
/*
* For the kernel proper, we need to find out the CPU boot mode long after
@@ -202,9 +205,19 @@ ARM_BE8(orr r7, r7, #(1 << 25)) @ HSCTLR.EE
ENDPROC(__hyp_stub_install_secondary)
__hyp_stub_do_trap:
- cmp r0, #-1
- mrceq p15, 4, r0, c12, c0, 0 @ get HVBAR
- mcrne p15, 4, r0, c12, c0, 0 @ set HVBAR
+ teq r0, #HVC_GET_VECTORS
+ bne 1f
+ mrc p15, 4, r0, c12, c0, 0 @ get HVBAR
+ b __hyp_stub_exit
+
+1: teq r0, #HVC_SET_VECTORS
+ bne 1f
+ mcr p15, 4, r1, c12, c0, 0 @ set HVBAR
+ b __hyp_stub_exit
+
+1: mov r0, #-1
+
+__hyp_stub_exit:
__ERET
ENDPROC(__hyp_stub_do_trap)
@@ -231,10 +244,14 @@ ENDPROC(__hyp_stub_do_trap)
* initialisation entry point.
*/
ENTRY(__hyp_get_vectors)
- mov r0, #-1
+ mov r0, #HVC_GET_VECTORS
+ __HVC(0)
+ ret lr
ENDPROC(__hyp_get_vectors)
- @ fall through
+
ENTRY(__hyp_set_vectors)
+ mov r1, r0
+ mov r0, #HVC_SET_VECTORS
__HVC(0)
ret lr
ENDPROC(__hyp_set_vectors)
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 48/79] ARM: soft-reboot into same mode that we entered the kernel
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (46 preceding siblings ...)
2017-04-23 17:08 ` [PULL 47/79] ARM: hyp-stub: improve ABI Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:08 ` [PULL 49/79] ARM: KVM: Convert KVM to use HVC_GET_VECTORS Christoffer Dall
` (31 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Russell King <rmk+kernel@armlinux.org.uk>
When we soft-reboot (eg, kexec) from one kernel into the next, we need
to ensure that we enter the new kernel in the same processor mode as
when we were entered, so that (eg) the new kernel can install its own
hypervisor - the old kernel's hypervisor will have been overwritten.
In order to do this, we need to pass a flag to cpu_reset() so it knows
what to do, and we need to modify the kernel's own hypervisor stub to
allow it to handle a soft-reboot.
As we are always guaranteed to install our own hypervisor if we're
entered in HYP32 mode, and KVM will have moved itself out of the way
on kexec/normal reboot, we can assume that our hypervisor is in place
when we want to kexec, so changing our hypervisor API should not be a
problem.
Tested-by: Keerthy <j-keerthy@ti.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm/include/asm/proc-fns.h | 4 ++--
arch/arm/kernel/hyp-stub.S | 13 +++++++++++++
arch/arm/kernel/reboot.c | 7 +++++--
arch/arm/mm/proc-v7.S | 12 ++++++++----
4 files changed, 28 insertions(+), 8 deletions(-)
diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
index 8877ad5..f2e1af4 100644
--- a/arch/arm/include/asm/proc-fns.h
+++ b/arch/arm/include/asm/proc-fns.h
@@ -43,7 +43,7 @@ extern struct processor {
/*
* Special stuff for a reset
*/
- void (*reset)(unsigned long addr) __attribute__((noreturn));
+ void (*reset)(unsigned long addr, bool hvc) __attribute__((noreturn));
/*
* Idle the processor
*/
@@ -88,7 +88,7 @@ extern void cpu_set_pte_ext(pte_t *ptep, pte_t pte);
#else
extern void cpu_set_pte_ext(pte_t *ptep, pte_t pte, unsigned int ext);
#endif
-extern void cpu_reset(unsigned long addr) __attribute__((noreturn));
+extern void cpu_reset(unsigned long addr, bool hvc) __attribute__((noreturn));
/* These three are private to arch/arm/kernel/suspend.c */
extern void cpu_do_suspend(void *);
diff --git a/arch/arm/kernel/hyp-stub.S b/arch/arm/kernel/hyp-stub.S
index f3e9ba5..8291523 100644
--- a/arch/arm/kernel/hyp-stub.S
+++ b/arch/arm/kernel/hyp-stub.S
@@ -24,6 +24,7 @@
#define HVC_GET_VECTORS 0
#define HVC_SET_VECTORS 1
+#define HVC_SOFT_RESTART 2
#ifndef ZIMAGE
/*
@@ -215,6 +216,10 @@ __hyp_stub_do_trap:
mcr p15, 4, r1, c12, c0, 0 @ set HVBAR
b __hyp_stub_exit
+1: teq r0, #HVC_SOFT_RESTART
+ bne 1f
+ bx r3
+
1: mov r0, #-1
__hyp_stub_exit:
@@ -256,6 +261,14 @@ ENTRY(__hyp_set_vectors)
ret lr
ENDPROC(__hyp_set_vectors)
+ENTRY(__hyp_soft_restart)
+ mov r3, r0
+ mov r0, #HVC_SOFT_RESTART
+ __HVC(0)
+ mov r0, r3
+ ret lr
+ENDPROC(__hyp_soft_restart)
+
#ifndef ZIMAGE
.align 2
.L__boot_cpu_mode_offset:
diff --git a/arch/arm/kernel/reboot.c b/arch/arm/kernel/reboot.c
index 3fa867a..3b2aa9a 100644
--- a/arch/arm/kernel/reboot.c
+++ b/arch/arm/kernel/reboot.c
@@ -12,10 +12,11 @@
#include <asm/cacheflush.h>
#include <asm/idmap.h>
+#include <asm/virt.h>
#include "reboot.h"
-typedef void (*phys_reset_t)(unsigned long);
+typedef void (*phys_reset_t)(unsigned long, bool);
/*
* Function pointers to optional machine specific functions
@@ -51,7 +52,9 @@ static void __soft_restart(void *addr)
/* Switch to the identity mapping. */
phys_reset = (phys_reset_t)virt_to_idmap(cpu_reset);
- phys_reset((unsigned long)addr);
+
+ /* original stub should be restored by kvm */
+ phys_reset((unsigned long)addr, is_hyp_mode_available());
/* Should never get here. */
BUG();
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index d00d52c..1846ca4 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -53,11 +53,15 @@ ENDPROC(cpu_v7_proc_fin)
.align 5
.pushsection .idmap.text, "ax"
ENTRY(cpu_v7_reset)
- mrc p15, 0, r1, c1, c0, 0 @ ctrl register
- bic r1, r1, #0x1 @ ...............m
- THUMB( bic r1, r1, #1 << 30 ) @ SCTLR.TE (Thumb exceptions)
- mcr p15, 0, r1, c1, c0, 0 @ disable MMU
+ mrc p15, 0, r2, c1, c0, 0 @ ctrl register
+ bic r2, r2, #0x1 @ ...............m
+ THUMB( bic r2, r2, #1 << 30 ) @ SCTLR.TE (Thumb exceptions)
+ mcr p15, 0, r2, c1, c0, 0 @ disable MMU
isb
+#ifdef CONFIG_ARM_VIRT_EXT
+ teq r1, #0
+ bne __hyp_soft_restart
+#endif
bx r0
ENDPROC(cpu_v7_reset)
.popsection
--
2.9.0
* [PULL 49/79] ARM: KVM: Convert KVM to use HVC_GET_VECTORS
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (47 preceding siblings ...)
2017-04-23 17:08 ` [PULL 48/79] ARM: soft-reboot into same mode that we entered the kernel Christoffer Dall
@ 2017-04-23 17:08 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 50/79] ARM: Update cpu_v7_reset documentation Christoffer Dall
` (30 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:08 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
The conversion of the HYP stub ABI to something similar to arm64
left the KVM code broken, as it doesn't know about the new
stub numbering. Let's move the various #defines to virt.h, and
let KVM use HVC_GET_VECTORS.
Tested-by: Keerthy <j-keerthy@ti.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm/include/asm/virt.h | 8 ++++++++
arch/arm/kernel/hyp-stub.S | 4 ----
arch/arm/kvm/hyp/hyp-entry.S | 2 +-
3 files changed, 9 insertions(+), 5 deletions(-)
diff --git a/arch/arm/include/asm/virt.h b/arch/arm/include/asm/virt.h
index 6dae195..4ea16fc 100644
--- a/arch/arm/include/asm/virt.h
+++ b/arch/arm/include/asm/virt.h
@@ -94,6 +94,14 @@ extern char __hyp_text_start[];
extern char __hyp_text_end[];
#endif
+#else
+
+/* Only assembly code should need those */
+
+#define HVC_GET_VECTORS 0
+#define HVC_SET_VECTORS 1
+#define HVC_SOFT_RESTART 2
+
#endif /* __ASSEMBLY__ */
#endif /* ! VIRT_H */
diff --git a/arch/arm/kernel/hyp-stub.S b/arch/arm/kernel/hyp-stub.S
index 8291523..8301db9 100644
--- a/arch/arm/kernel/hyp-stub.S
+++ b/arch/arm/kernel/hyp-stub.S
@@ -22,10 +22,6 @@
#include <asm/assembler.h>
#include <asm/virt.h>
-#define HVC_GET_VECTORS 0
-#define HVC_SET_VECTORS 1
-#define HVC_SOFT_RESTART 2
-
#ifndef ZIMAGE
/*
* For the kernel proper, we need to find out the CPU boot mode long after
diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
index 96beb53..1f8db7d 100644
--- a/arch/arm/kvm/hyp/hyp-entry.S
+++ b/arch/arm/kvm/hyp/hyp-entry.S
@@ -127,7 +127,7 @@ hyp_hvc:
pop {r0, r1, r2}
/* Check for __hyp_get_vectors */
- cmp r0, #-1
+ cmp r0, #HVC_GET_VECTORS
mrceq p15, 4, r0, c12, c0, 0 @ get HVBAR
beq 1f
--
2.9.0
* [PULL 50/79] ARM: Update cpu_v7_reset documentation
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (48 preceding siblings ...)
2017-04-23 17:08 ` [PULL 49/79] ARM: KVM: Convert KVM to use HVC_GET_VECTORS Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 51/79] ARM: hyp-stub: Use r1 for the soft-restart address Christoffer Dall
` (29 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
cpu_v7_reset() now takes a second parameter indicating whether
we should reboot in HYP or not. Update the documentation to
reflect this.
Tested-by: Keerthy <j-keerthy@ti.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm/mm/proc-v7.S | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index 1846ca4..01d64c0 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -39,13 +39,14 @@ ENTRY(cpu_v7_proc_fin)
ENDPROC(cpu_v7_proc_fin)
/*
- * cpu_v7_reset(loc)
+ * cpu_v7_reset(loc, hyp)
*
* Perform a soft reset of the system. Put the CPU into the
* same state as it would be if it had been reset, and branch
* to what would be the reset vector.
*
* - loc - location to jump to for soft reset
+ * - hyp - indicate if restart occurs in HYP mode
*
* This code must be executed using a flat identity mapping with
* caches disabled.
--
2.9.0
* [PULL 51/79] ARM: hyp-stub: Use r1 for the soft-restart address
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (49 preceding siblings ...)
2017-04-23 17:09 ` [PULL 50/79] ARM: Update cpu_v7_reset documentation Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 52/79] ARM: Expose the VA/IDMAP offset Christoffer Dall
` (28 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
It is not really obvious why the restart address should be in r3
when communicated to the hyp-stub. r1 should be perfectly adequate,
and consistent with the rest of the code.
Tested-by: Keerthy <j-keerthy@ti.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm/kernel/hyp-stub.S | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/arch/arm/kernel/hyp-stub.S b/arch/arm/kernel/hyp-stub.S
index 8301db9..15eaa14 100644
--- a/arch/arm/kernel/hyp-stub.S
+++ b/arch/arm/kernel/hyp-stub.S
@@ -214,7 +214,7 @@ __hyp_stub_do_trap:
1: teq r0, #HVC_SOFT_RESTART
bne 1f
- bx r3
+ bx r1
1: mov r0, #-1
@@ -258,10 +258,9 @@ ENTRY(__hyp_set_vectors)
ENDPROC(__hyp_set_vectors)
ENTRY(__hyp_soft_restart)
- mov r3, r0
+ mov r1, r0
mov r0, #HVC_SOFT_RESTART
__HVC(0)
- mov r0, r3
ret lr
ENDPROC(__hyp_soft_restart)
--
2.9.0
* [PULL 52/79] ARM: Expose the VA/IDMAP offset
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (50 preceding siblings ...)
2017-04-23 17:09 ` [PULL 51/79] ARM: hyp-stub: Use r1 for the soft-restart address Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 53/79] ARM: hyp-stub: Define a return value for failed stub calls Christoffer Dall
` (27 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
The KVM code needs to be able to compute the address of
symbols in its idmap page (the equivalent of a virt_to_idmap()
call). Unfortunately, virt_to_idmap is slightly complicated,
depending on the use of arch_phys_to_idmap_offset or not, and
none of that is readily available at HYP.
Instead, expose a single kimage_voffset variable which contains the
offset between a kernel VA and its idmap address, enabling the
VA->IDMAP conversion. This allows the KVM code to behave similarly
to its arm64 counterpart.
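(Illustration only, not part of the patch: the conversion this enables is a
single subtraction. The helper name below is hypothetical; the real consumer
is the KVM HYP entry assembly later in this series.)

    /* hypothetical sketch: kernel VA -> idmap (physical) address */
    extern unsigned long kimage_voffset;

    static inline unsigned long va_to_idmap(unsigned long va)
    {
            return va - kimage_voffset;
    }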
Tested-by: Keerthy <j-keerthy@ti.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm/mm/mmu.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 4e016d7..e98a2b5 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -87,6 +87,8 @@ struct cachepolicy {
#define s2_policy(policy) 0
#endif
+unsigned long kimage_voffset __ro_after_init;
+
static struct cachepolicy cache_policies[] __initdata = {
{
.policy = "uncached",
@@ -1635,4 +1637,7 @@ void __init paging_init(const struct machine_desc *mdesc)
empty_zero_page = virt_to_page(zero_page);
__flush_dcache_page(NULL, empty_zero_page);
+
+ /* Compute the virt/idmap offset, mostly for the sake of KVM */
+ kimage_voffset = (unsigned long)&kimage_voffset - virt_to_idmap(&kimage_voffset);
}
--
2.9.0
* [PULL 53/79] ARM: hyp-stub: Define a return value for failed stub calls
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (51 preceding siblings ...)
2017-04-23 17:09 ` [PULL 52/79] ARM: Expose the VA/IDMAP offset Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 54/79] ARM: hyp-stub: Implement HVC_RESET_VECTORS stub hypercall Christoffer Dall
` (26 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Define a standard return value to be returned when a hyp stub
call fails.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm/include/asm/virt.h | 2 ++
arch/arm/kernel/hyp-stub.S | 2 +-
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/arm/include/asm/virt.h b/arch/arm/include/asm/virt.h
index 4ea16fc..c16f70d 100644
--- a/arch/arm/include/asm/virt.h
+++ b/arch/arm/include/asm/virt.h
@@ -104,4 +104,6 @@ extern char __hyp_text_end[];
#endif /* __ASSEMBLY__ */
+#define HVC_STUB_ERR 0xbadca11
+
#endif /* ! VIRT_H */
diff --git a/arch/arm/kernel/hyp-stub.S b/arch/arm/kernel/hyp-stub.S
index 15eaa14..b20ca88 100644
--- a/arch/arm/kernel/hyp-stub.S
+++ b/arch/arm/kernel/hyp-stub.S
@@ -216,7 +216,7 @@ __hyp_stub_do_trap:
bne 1f
bx r1
-1: mov r0, #-1
+1: ldr r0, =HVC_STUB_ERR
__hyp_stub_exit:
__ERET
--
2.9.0
* [PULL 54/79] ARM: hyp-stub: Implement HVC_RESET_VECTORS stub hypercall
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (52 preceding siblings ...)
2017-04-23 17:09 ` [PULL 53/79] ARM: hyp-stub: Define a return value for failed stub calls Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 55/79] ARM: KVM: Implement HVC_RESET_VECTORS stub hypercall in the init code Christoffer Dall
` (25 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Let's define a new stub hypercall that resets the HYP configuration
to its default: hyp-stub vectors, and MMU disabled.
Of course, for the hyp-stub itself, this is a trivial no-op.
Hypervisors will have a bit more work to do.
Tested-by: Keerthy <j-keerthy@ti.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm/include/asm/virt.h | 3 +++
arch/arm/kernel/hyp-stub.S | 11 ++++++++++-
2 files changed, 13 insertions(+), 1 deletion(-)
diff --git a/arch/arm/include/asm/virt.h b/arch/arm/include/asm/virt.h
index c16f70d..c5a2757 100644
--- a/arch/arm/include/asm/virt.h
+++ b/arch/arm/include/asm/virt.h
@@ -101,6 +101,9 @@ extern char __hyp_text_end[];
#define HVC_GET_VECTORS 0
#define HVC_SET_VECTORS 1
#define HVC_SOFT_RESTART 2
+#define HVC_RESET_VECTORS 3
+
+#define HVC_STUB_HCALL_NR 4
#endif /* __ASSEMBLY__ */
diff --git a/arch/arm/kernel/hyp-stub.S b/arch/arm/kernel/hyp-stub.S
index b20ca88..e637854 100644
--- a/arch/arm/kernel/hyp-stub.S
+++ b/arch/arm/kernel/hyp-stub.S
@@ -216,7 +216,10 @@ __hyp_stub_do_trap:
bne 1f
bx r1
-1: ldr r0, =HVC_STUB_ERR
+1: teq r0, #HVC_RESET_VECTORS
+ beq __hyp_stub_exit
+
+ ldr r0, =HVC_STUB_ERR
__hyp_stub_exit:
__ERET
@@ -264,6 +267,12 @@ ENTRY(__hyp_soft_restart)
ret lr
ENDPROC(__hyp_soft_restart)
+ENTRY(__hyp_reset_vectors)
+ mov r0, #HVC_RESET_VECTORS
+ __HVC(0)
+ ret lr
+ENDPROC(__hyp_reset_vectors)
+
#ifndef ZIMAGE
.align 2
.L__boot_cpu_mode_offset:
--
2.9.0
* [PULL 55/79] ARM: KVM: Implement HVC_RESET_VECTORS stub hypercall in the init code
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (53 preceding siblings ...)
2017-04-23 17:09 ` [PULL 54/79] ARM: hyp-stub: Implement HVC_RESET_VECTORS stub hypercall Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 56/79] ARM: KVM: Implement HVC_GET_VECTORS " Christoffer Dall
` (24 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
In order to restore HYP mode to its original condition, KVM currently
implements __kvm_hyp_reset(). As we're moving towards a hyp-stub
defined API, it becomes necessary to implement HVC_RESET_VECTORS.
This patch adds the HVC_RESET_VECTORS hypercall to the KVM init
code, which so far lacked any form of hypercall support.
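(A rough C-level rendering of the check described in the comment added below;
is_stub_hypercall() is a hypothetical name, the real code is the cmp/blo pair
at the top of __do_hyp_init.)

    /* sketch: small r0 values are stub hypercalls, anything else is
     * the usual KVM init argument (the HYP stack pointer) */
    static inline bool is_stub_hypercall(unsigned long r0)
    {
            return r0 < HVC_STUB_HCALL_NR;
    }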
Tested-by: Keerthy <j-keerthy@ti.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm/include/asm/virt.h | 1 +
arch/arm/kernel/hyp-stub.S | 2 +-
arch/arm/kvm/init.S | 33 +++++++++++++++++++++++++++------
3 files changed, 29 insertions(+), 7 deletions(-)
diff --git a/arch/arm/include/asm/virt.h b/arch/arm/include/asm/virt.h
index c5a2757..663adc0 100644
--- a/arch/arm/include/asm/virt.h
+++ b/arch/arm/include/asm/virt.h
@@ -54,6 +54,7 @@ static inline void sync_boot_mode(void)
void __hyp_set_vectors(unsigned long phys_vector_base);
unsigned long __hyp_get_vectors(void);
+void __hyp_reset_vectors(void);
#else
#define __boot_cpu_mode (SVC_MODE)
#define sync_boot_mode()
diff --git a/arch/arm/kernel/hyp-stub.S b/arch/arm/kernel/hyp-stub.S
index e637854..675c50f 100644
--- a/arch/arm/kernel/hyp-stub.S
+++ b/arch/arm/kernel/hyp-stub.S
@@ -280,7 +280,7 @@ ENDPROC(__hyp_reset_vectors)
#endif
.align 5
-__hyp_stub_vectors:
+ENTRY(__hyp_stub_vectors)
__hyp_stub_reset: W(b) .
__hyp_stub_und: W(b) .
__hyp_stub_svc: W(b) .
diff --git a/arch/arm/kvm/init.S b/arch/arm/kvm/init.S
index bf89c91..86a7008 100644
--- a/arch/arm/kvm/init.S
+++ b/arch/arm/kvm/init.S
@@ -23,6 +23,7 @@
#include <asm/kvm_asm.h>
#include <asm/kvm_arm.h>
#include <asm/kvm_mmu.h>
+#include <asm/virt.h>
/********************************************************************
* Hypervisor initialization
@@ -39,6 +40,10 @@
* - Setup the page tables
* - Enable the MMU
* - Profit! (or eret, if you only care about the code).
+ *
+ * Another possibility is to get a HYP stub hypercall.
+ * We discriminate between the two by checking if r0 contains a value
+ * that is less than HVC_STUB_HCALL_NR.
*/
.text
@@ -58,6 +63,10 @@ __kvm_hyp_init:
W(b) .
__do_hyp_init:
+ @ Check for a stub hypercall
+ cmp r0, #HVC_STUB_HCALL_NR
+ blo __kvm_handle_stub_hvc
+
@ Set stack pointer
mov sp, r0
@@ -112,19 +121,31 @@ __do_hyp_init:
eret
- @ r0 : stub vectors address
+ENTRY(__kvm_handle_stub_hvc)
+ cmp r0, #HVC_RESET_VECTORS
+ bne 1f
ENTRY(__kvm_hyp_reset)
/* We're now in idmap, disable MMU */
mrc p15, 4, r1, c1, c0, 0 @ HSCTLR
- ldr r2, =(HSCTLR_M | HSCTLR_A | HSCTLR_C | HSCTLR_I)
- bic r1, r1, r2
+ ldr r0, =(HSCTLR_M | HSCTLR_A | HSCTLR_C | HSCTLR_I)
+ bic r1, r1, r0
mcr p15, 4, r1, c1, c0, 0 @ HSCTLR
- /* Install stub vectors */
- mcr p15, 4, r0, c12, c0, 0 @ HVBAR
- isb
+ /*
+ * Install stub vectors, using ardb's VA->PA trick.
+ */
+0: adr r0, 0b @ PA(0)
+ movw r1, #:lower16:__hyp_stub_vectors - 0b @ VA(stub) - VA(0)
+ movt r1, #:upper16:__hyp_stub_vectors - 0b
+ add r1, r1, r0 @ PA(stub)
+ mcr p15, 4, r1, c12, c0, 0 @ HVBAR
+ b exit
+
+1: ldr r0, =HVC_STUB_ERR
+exit:
eret
+ENDPROC(__kvm_handle_stub_hvc)
ENDPROC(__kvm_hyp_reset)
.ltorg
--
2.9.0
* [PULL 56/79] ARM: KVM: Implement HVC_GET_VECTORS in the init code
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (54 preceding siblings ...)
2017-04-23 17:09 ` [PULL 55/79] ARM: KVM: Implement HVC_RESET_VECTORS stub hypercall in the init code Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 57/79] ARM: KVM: Allow the main HYP code to use the init hyp stub implementation Christoffer Dall
` (23 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Now that we have an infrastructure to handle hypercalls in the KVM
init code, let's implement HVC_GET_VECTORS there.
Tested-by: Keerthy <j-keerthy@ti.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm/kvm/init.S | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/arm/kvm/init.S b/arch/arm/kvm/init.S
index 86a7008..d6b2f49 100644
--- a/arch/arm/kvm/init.S
+++ b/arch/arm/kvm/init.S
@@ -122,7 +122,12 @@ __do_hyp_init:
eret
ENTRY(__kvm_handle_stub_hvc)
- cmp r0, #HVC_RESET_VECTORS
+ cmp r0, #HVC_GET_VECTORS
+ bne 1f
+ mrc p15, 4, r0, c12, c0, 0 @ get HVBAR
+ b exit
+
+1: cmp r0, #HVC_RESET_VECTORS
bne 1f
ENTRY(__kvm_hyp_reset)
/* We're now in idmap, disable MMU */
--
2.9.0
* [PULL 57/79] ARM: KVM: Allow the main HYP code to use the init hyp stub implementation
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (55 preceding siblings ...)
2017-04-23 17:09 ` [PULL 56/79] ARM: KVM: Implement HVC_GET_VECTORS " Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 58/79] ARM: KVM: Convert __cpu_reset_hyp_mode to using __hyp_reset_vectors Christoffer Dall
` (22 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
We now have a full hyp-stub implementation in the KVM init code,
but the main KVM code only supports HVC_GET_VECTORS, which is not
enough.
Instead of reinventing the wheel, let's reuse the init implementation
by branching to the idmap page when called with a hyp-stub hypercall.
Tested-by: Keerthy <j-keerthy@ti.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm/kvm/hyp/hyp-entry.S | 29 ++++++++++++++++++++++++-----
1 file changed, 24 insertions(+), 5 deletions(-)
diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
index 1f8db7d..a35baa8 100644
--- a/arch/arm/kvm/hyp/hyp-entry.S
+++ b/arch/arm/kvm/hyp/hyp-entry.S
@@ -126,11 +126,30 @@ hyp_hvc:
*/
pop {r0, r1, r2}
- /* Check for __hyp_get_vectors */
- cmp r0, #HVC_GET_VECTORS
- mrceq p15, 4, r0, c12, c0, 0 @ get HVBAR
- beq 1f
+ /*
+ * Check if we have a kernel function, which is guaranteed to be
+ * bigger than the maximum hyp stub hypercall
+ */
+ cmp r0, #HVC_STUB_HCALL_NR
+ bhs 1f
+ /*
+ * Not a kernel function, treat it as a stub hypercall.
+ * Compute the physical address for __kvm_handle_stub_hvc
+ * (as the code lives in the idmaped page) and branch there.
+ * We hijack ip (r12) as a tmp register.
+ */
+ push {r1}
+ ldr r1, =kimage_voffset
+ ldr r1, [r1]
+ ldr ip, =__kvm_handle_stub_hvc
+ sub ip, ip, r1
+THUMB( add ip, ip, #1)
+ pop {r1}
+
+ bx ip
+
+1:
push {lr}
mov lr, r0
@@ -142,7 +161,7 @@ THUMB( orr lr, #1)
blx lr @ Call the HYP function
pop {lr}
-1: eret
+ eret
guest_trap:
load_vcpu r0 @ Load VCPU pointer to r0
--
2.9.0
* [PULL 58/79] ARM: KVM: Convert __cpu_reset_hyp_mode to using __hyp_reset_vectors
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (56 preceding siblings ...)
2017-04-23 17:09 ` [PULL 57/79] ARM: KVM: Allow the main HYP code to use the init hyp stub implementation Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 59/79] ARM: KVM: Implement HVC_SOFT_RESTART in the init code Christoffer Dall
` (21 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
We are now able to use the hyp stub to reset HYP mode. Time to
kiss __kvm_hyp_reset goodbye, and use __hyp_reset_vectors.
Tested-by: Keerthy <j-keerthy@ti.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm/include/asm/kvm_asm.h | 2 --
arch/arm/include/asm/kvm_host.h | 2 +-
arch/arm/kvm/init.S | 2 --
3 files changed, 1 insertion(+), 5 deletions(-)
diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index dd16044..eae11b3 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -72,8 +72,6 @@ extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
extern void __init_stage2_translation(void);
-extern void __kvm_hyp_reset(unsigned long);
-
extern u64 __vgic_v3_get_ich_vtr_el2(void);
extern u64 __vgic_v3_read_vmcr(void);
extern void __vgic_v3_write_vmcr(u32 vmcr);
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 31ee468..adea307 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -273,7 +273,7 @@ static inline void __cpu_init_stage2(void)
static inline void __cpu_reset_hyp_mode(unsigned long vector_ptr,
phys_addr_t phys_idmap_start)
{
- kvm_call_hyp((void *)virt_to_idmap(__kvm_hyp_reset), vector_ptr);
+ __hyp_reset_vectors();
}
static inline int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext)
diff --git a/arch/arm/kvm/init.S b/arch/arm/kvm/init.S
index d6b2f49..fb33609 100644
--- a/arch/arm/kvm/init.S
+++ b/arch/arm/kvm/init.S
@@ -129,7 +129,6 @@ ENTRY(__kvm_handle_stub_hvc)
1: cmp r0, #HVC_RESET_VECTORS
bne 1f
-ENTRY(__kvm_hyp_reset)
/* We're now in idmap, disable MMU */
mrc p15, 4, r1, c1, c0, 0 @ HSCTLR
ldr r0, =(HSCTLR_M | HSCTLR_A | HSCTLR_C | HSCTLR_I)
@@ -151,7 +150,6 @@ ENTRY(__kvm_hyp_reset)
exit:
eret
ENDPROC(__kvm_handle_stub_hvc)
-ENDPROC(__kvm_hyp_reset)
.ltorg
--
2.9.0
* [PULL 59/79] ARM: KVM: Implement HVC_SOFT_RESTART in the init code
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (57 preceding siblings ...)
2017-04-23 17:09 ` [PULL 58/79] ARM: KVM: Convert __cpu_reset_hyp_mode to using __hyp_reset_vectors Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 60/79] ARM: KVM: Gracefully handle hyp-stubs being restored from under our feet Christoffer Dall
` (20 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Another missing stub hypercall is HVC_SOFT_RESTART. It turns out
that it is pretty easy to implement in terms of HVC_RESET_VECTORS
(since it needs to turn the MMU off).
Tested-by: Keerthy <j-keerthy@ti.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm/kvm/init.S | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/arch/arm/kvm/init.S b/arch/arm/kvm/init.S
index fb33609..e53360d 100644
--- a/arch/arm/kvm/init.S
+++ b/arch/arm/kvm/init.S
@@ -127,8 +127,22 @@ ENTRY(__kvm_handle_stub_hvc)
mrc p15, 4, r0, c12, c0, 0 @ get HVBAR
b exit
+1: cmp r0, #HVC_SOFT_RESTART
+ bne 1f
+
+ /* The target is expected in r1 */
+ msr ELR_hyp, r1
+ mrs r0, cpsr
+ bic r0, r0, #MODE_MASK
+ orr r0, r0, #HYP_MODE
+THUMB( orr r0, r0, #PSR_T_BIT )
+ msr spsr_cxsf, r0
+ b reset
+
1: cmp r0, #HVC_RESET_VECTORS
bne 1f
+
+reset:
/* We're now in idmap, disable MMU */
mrc p15, 4, r1, c1, c0, 0 @ HSCTLR
ldr r0, =(HSCTLR_M | HSCTLR_A | HSCTLR_C | HSCTLR_I)
--
2.9.0
* [PULL 60/79] ARM: KVM: Gracefully handle hyp-stubs being restored from under our feet
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (58 preceding siblings ...)
2017-04-23 17:09 ` [PULL 59/79] ARM: KVM: Implement HVC_SOFT_RESTART in the init code Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 61/79] arm/arm64: KVM: Use __hyp_reset_vectors() directly Christoffer Dall
` (19 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Should kvm_reboot() be invoked while a guest is running, an IPI
will be issued, forcing the guest to exit and HYP to be reset to
the stubs. We will then try to re-enter the guest, only to get
an error (HVC_STUB_ERR).
This patch allows this case to be gracefully handled by exiting
the run loop.
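(For context, user space observes this as a KVM_EXIT_FAIL_ENTRY exit. The
fragment below is only an illustrative VMM-side sketch, not code from this
series.)

    #include <linux/kvm.h>

    /* sketch: stop running the vcpu once HYP has been torn down */
    static int handle_exit(struct kvm_run *run)
    {
            if (run->exit_reason == KVM_EXIT_FAIL_ENTRY)
                    return -1;      /* HYP is gone, e.g. kvm_reboot() ran */
            return 0;               /* handle other exit reasons as usual */
    }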
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm/include/asm/kvm_asm.h | 2 +-
arch/arm/kvm/handle_exit.c | 8 ++++++++
2 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index eae11b3..14d68a4 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -33,7 +33,7 @@
#define ARM_EXCEPTION_IRQ 5
#define ARM_EXCEPTION_FIQ 6
#define ARM_EXCEPTION_HVC 7
-
+#define ARM_EXCEPTION_HYP_GONE HVC_STUB_ERR
/*
* The rr_lo_hi macro swaps a pair of registers depending on
* current endianness. It is used in conjunction with ldrd and strd
diff --git a/arch/arm/kvm/handle_exit.c b/arch/arm/kvm/handle_exit.c
index 96af65a..5fd7968 100644
--- a/arch/arm/kvm/handle_exit.c
+++ b/arch/arm/kvm/handle_exit.c
@@ -160,6 +160,14 @@ int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
case ARM_EXCEPTION_DATA_ABORT:
kvm_inject_vabt(vcpu);
return 1;
+ case ARM_EXCEPTION_HYP_GONE:
+ /*
+ * HYP has been reset to the hyp-stub. This happens
+ * when a guest is pre-empted by kvm_reboot()'s
+ * shutdown call.
+ */
+ run->exit_reason = KVM_EXIT_FAIL_ENTRY;
+ return 0;
default:
kvm_pr_unimpl("Unsupported exception type: %d",
exception_index);
--
2.9.0
* [PULL 61/79] arm/arm64: KVM: Use __hyp_reset_vectors() directly
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (59 preceding siblings ...)
2017-04-23 17:09 ` [PULL 60/79] ARM: KVM: Gracefully handle hyp-stubs being restored from under our feet Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 62/79] arm/arm64: KVM: Remove kvm_get_idmap_start Christoffer Dall
` (18 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
__cpu_reset_hyp_mode no longer needs to be passed any argument, as
the hyp-stub implementations are self-contained; it is now reduced
to just calling __hyp_reset_vectors(). Let's drop the
wrapper and use the stub hypercall directly.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm/include/asm/kvm_host.h | 6 ------
arch/arm/kvm/arm.c | 3 +--
arch/arm64/include/asm/kvm_host.h | 6 ------
3 files changed, 1 insertion(+), 14 deletions(-)
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index adea307..d488b88 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -270,12 +270,6 @@ static inline void __cpu_init_stage2(void)
kvm_call_hyp(__init_stage2_translation);
}
-static inline void __cpu_reset_hyp_mode(unsigned long vector_ptr,
- phys_addr_t phys_idmap_start)
-{
- __hyp_reset_vectors();
-}
-
static inline int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext)
{
return 0;
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 46fd375..c8f4fa6 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -1130,8 +1130,7 @@ static void cpu_hyp_reinit(void)
static void cpu_hyp_reset(void)
{
if (!is_kernel_in_hyp_mode())
- __cpu_reset_hyp_mode(hyp_default_vectors,
- kvm_get_idmap_start());
+ __hyp_reset_vectors();
}
static void _kvm_arch_hardware_enable(void *discard)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 0355dd1..578df18 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -362,12 +362,6 @@ static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
__kvm_call_hyp((void *)pgd_ptr, hyp_stack_ptr, vector_ptr);
}
-static inline void __cpu_reset_hyp_mode(unsigned long vector_ptr,
- phys_addr_t phys_idmap_start)
-{
- __hyp_reset_vectors();
-}
-
static inline void kvm_arch_hardware_unsetup(void) {}
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
--
2.9.0
* [PULL 62/79] arm/arm64: KVM: Remove kvm_get_idmap_start
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (60 preceding siblings ...)
2017-04-23 17:09 ` [PULL 61/79] arm/arm64: KVM: Use __hyp_reset_vectors() directly Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 63/79] arm/arm64: KVM: Use HVC_RESET_VECTORS to reinit HYP mode Christoffer Dall
` (17 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
With __cpu_reset_hyp_mode having become fairly dumb, there is no
need for kvm_get_idmap_start anymore.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm/include/asm/kvm_mmu.h | 1 -
arch/arm/kvm/mmu.c | 5 -----
arch/arm64/include/asm/kvm_mmu.h | 1 -
3 files changed, 7 deletions(-)
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 95f38dc..fa6f217 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -56,7 +56,6 @@ void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
phys_addr_t kvm_mmu_get_httbr(void);
phys_addr_t kvm_get_idmap_vector(void);
-phys_addr_t kvm_get_idmap_start(void);
int kvm_mmu_init(void);
void kvm_clear_hyp_idmap(void);
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 69554bd..efb4335 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -1669,11 +1669,6 @@ phys_addr_t kvm_get_idmap_vector(void)
return hyp_idmap_vector;
}
-phys_addr_t kvm_get_idmap_start(void)
-{
- return hyp_idmap_start;
-}
-
static int kvm_map_idmap_text(pgd_t *pgd)
{
int err;
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index ed12460..91d93a5 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -155,7 +155,6 @@ void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
phys_addr_t kvm_mmu_get_httbr(void);
phys_addr_t kvm_get_idmap_vector(void);
-phys_addr_t kvm_get_idmap_start(void);
int kvm_mmu_init(void);
void kvm_clear_hyp_idmap(void);
--
2.9.0
* [PULL 63/79] arm/arm64: KVM: Use HVC_RESET_VECTORS to reinit HYP mode
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (61 preceding siblings ...)
2017-04-23 17:09 ` [PULL 62/79] arm/arm64: KVM: Remove kvm_get_idmap_start Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 64/79] ARM: decompressor: Remove __hyp_get_vectors usage Christoffer Dall
` (16 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Instead of trying to compare the value given by __hyp_get_vectors(),
which doesn't offer any real guarantee to be the stub's address, use
HVC_RESET_VECTORS to make sure we're in a sane state to reinstall
KVM across PM events.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm/kvm/arm.c | 24 +++++++++---------------
1 file changed, 9 insertions(+), 15 deletions(-)
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index c8f4fa6..c378502 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -53,7 +53,6 @@ __asm__(".arch_extension virt");
static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
static kvm_cpu_context_t __percpu *kvm_host_cpu_state;
-static unsigned long hyp_default_vectors;
/* Per-CPU variable containing the currently running vcpu. */
static DEFINE_PER_CPU(struct kvm_vcpu *, kvm_arm_running_vcpu);
@@ -1113,8 +1112,16 @@ static void cpu_init_hyp_mode(void *dummy)
kvm_arm_init_debug();
}
+static void cpu_hyp_reset(void)
+{
+ if (!is_kernel_in_hyp_mode())
+ __hyp_reset_vectors();
+}
+
static void cpu_hyp_reinit(void)
{
+ cpu_hyp_reset();
+
if (is_kernel_in_hyp_mode()) {
/*
* __cpu_init_stage2() is safe to call even if the PM
@@ -1122,17 +1129,10 @@ static void cpu_hyp_reinit(void)
*/
__cpu_init_stage2();
} else {
- if (__hyp_get_vectors() == hyp_default_vectors)
- cpu_init_hyp_mode(NULL);
+ cpu_init_hyp_mode(NULL);
}
}
-static void cpu_hyp_reset(void)
-{
- if (!is_kernel_in_hyp_mode())
- __hyp_reset_vectors();
-}
-
static void _kvm_arch_hardware_enable(void *discard)
{
if (!__this_cpu_read(kvm_arm_hardware_enabled)) {
@@ -1316,12 +1316,6 @@ static int init_hyp_mode(void)
goto out_err;
/*
- * It is probably enough to obtain the default on one
- * CPU. It's unlikely to be different on the others.
- */
- hyp_default_vectors = __hyp_get_vectors();
-
- /*
* Allocate stack pages for Hypervisor-mode
*/
for_each_possible_cpu(cpu) {
--
2.9.0
* [PULL 64/79] ARM: decompressor: Remove __hyp_get_vectors usage
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (62 preceding siblings ...)
2017-04-23 17:09 ` [PULL 63/79] arm/arm64: KVM: Use HVC_RESET_VECTORS to reinit HYP mode Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 65/79] ARM: hyp-stub/KVM: Kill __hyp_get_vectors Christoffer Dall
` (15 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
When the compressed image needs to be relocated to avoid being
overwritten by the decompression process, we need to relocate
the hyp vectors as well so that we can find them once the
decompression has taken effect.
For that, we perform the following calculation:
u32 v = __hyp_get_vectors();
v += offset;
__hyp_set_vectors(v);
But we're guaranteed that the initial value of v as returned by
__hyp_get_vectors is always __hyp_stub_vectors, because we have
just set it by calling __hyp_stub_install.
So let's remove the use of __hyp_get_vectors, and directly use
__hyp_stub_vectors instead.
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm/boot/compressed/head.S | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
index 9150f97..7c711ba 100644
--- a/arch/arm/boot/compressed/head.S
+++ b/arch/arm/boot/compressed/head.S
@@ -422,7 +422,17 @@ dtb_check_done:
cmp r0, #HYP_MODE
bne 1f
- bl __hyp_get_vectors
+ /*
+ * Compute the address of the hyp vectors after relocation.
+ * This requires some arithmetic since we cannot directly
+ * reference __hyp_stub_vectors in a PC-relative way.
+ * Call __hyp_set_vectors with the new address so that we
+ * can HVC again after the copy.
+ */
+0: adr r0, 0b
+ movw r1, #:lower16:__hyp_stub_vectors - 0b
+ movt r1, #:upper16:__hyp_stub_vectors - 0b
+ add r0, r0, r1
sub r0, r0, r5
add r0, r0, r10
bl __hyp_set_vectors
--
2.9.0
* [PULL 65/79] ARM: hyp-stub/KVM: Kill __hyp_get_vectors
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (63 preceding siblings ...)
2017-04-23 17:09 ` [PULL 64/79] ARM: decompressor: Remove __hyp_get_vectors usage Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 66/79] arm64: " Christoffer Dall
` (14 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Nobody is using __hyp_get_vectors anymore, so let's remove both
implementations (hyp-stub and KVM).
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm/include/asm/virt.h | 10 ++++------
arch/arm/kernel/hyp-stub.S | 13 +------------
arch/arm/kvm/init.S | 7 +------
arch/arm/kvm/interrupts.S | 4 ----
4 files changed, 6 insertions(+), 28 deletions(-)
diff --git a/arch/arm/include/asm/virt.h b/arch/arm/include/asm/virt.h
index 663adc0..141144f 100644
--- a/arch/arm/include/asm/virt.h
+++ b/arch/arm/include/asm/virt.h
@@ -53,7 +53,6 @@ static inline void sync_boot_mode(void)
}
void __hyp_set_vectors(unsigned long phys_vector_base);
-unsigned long __hyp_get_vectors(void);
void __hyp_reset_vectors(void);
#else
#define __boot_cpu_mode (SVC_MODE)
@@ -99,12 +98,11 @@ extern char __hyp_text_end[];
/* Only assembly code should need those */
-#define HVC_GET_VECTORS 0
-#define HVC_SET_VECTORS 1
-#define HVC_SOFT_RESTART 2
-#define HVC_RESET_VECTORS 3
+#define HVC_SET_VECTORS 0
+#define HVC_SOFT_RESTART 1
+#define HVC_RESET_VECTORS 2
-#define HVC_STUB_HCALL_NR 4
+#define HVC_STUB_HCALL_NR 3
#endif /* __ASSEMBLY__ */
diff --git a/arch/arm/kernel/hyp-stub.S b/arch/arm/kernel/hyp-stub.S
index 675c50f..918c64f 100644
--- a/arch/arm/kernel/hyp-stub.S
+++ b/arch/arm/kernel/hyp-stub.S
@@ -202,12 +202,7 @@ ARM_BE8(orr r7, r7, #(1 << 25)) @ HSCTLR.EE
ENDPROC(__hyp_stub_install_secondary)
__hyp_stub_do_trap:
- teq r0, #HVC_GET_VECTORS
- bne 1f
- mrc p15, 4, r0, c12, c0, 0 @ get HVBAR
- b __hyp_stub_exit
-
-1: teq r0, #HVC_SET_VECTORS
+ teq r0, #HVC_SET_VECTORS
bne 1f
mcr p15, 4, r1, c12, c0, 0 @ set HVBAR
b __hyp_stub_exit
@@ -247,12 +242,6 @@ ENDPROC(__hyp_stub_do_trap)
* so you will need to set that to something sensible at the new hypervisor's
* initialisation entry point.
*/
-ENTRY(__hyp_get_vectors)
- mov r0, #HVC_GET_VECTORS
- __HVC(0)
- ret lr
-ENDPROC(__hyp_get_vectors)
-
ENTRY(__hyp_set_vectors)
mov r1, r0
mov r0, #HVC_SET_VECTORS
diff --git a/arch/arm/kvm/init.S b/arch/arm/kvm/init.S
index e53360d..87bcd7a 100644
--- a/arch/arm/kvm/init.S
+++ b/arch/arm/kvm/init.S
@@ -122,12 +122,7 @@ __do_hyp_init:
eret
ENTRY(__kvm_handle_stub_hvc)
- cmp r0, #HVC_GET_VECTORS
- bne 1f
- mrc p15, 4, r0, c12, c0, 0 @ get HVBAR
- b exit
-
-1: cmp r0, #HVC_SOFT_RESTART
+ cmp r0, #HVC_SOFT_RESTART
bne 1f
/* The target is expected in r1 */
diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
index b1bd316..80a1d6c 100644
--- a/arch/arm/kvm/interrupts.S
+++ b/arch/arm/kvm/interrupts.S
@@ -37,10 +37,6 @@
* in Hyp mode (see init_hyp_mode in arch/arm/kvm/arm.c). Return values are
* passed in r0 (strictly 32bit).
*
- * A function pointer with a value of 0xffffffff has a special meaning,
- * and is used to implement __hyp_get_vectors in the same way as in
- * arch/arm/kernel/hyp_stub.S.
- *
* The calling convention follows the standard AAPCS:
* r0 - r3: caller save
* r12: caller save
--
2.9.0
* [PULL 66/79] arm64: hyp-stub/KVM: Kill __hyp_get_vectors
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (64 preceding siblings ...)
2017-04-23 17:09 ` [PULL 65/79] ARM: hyp-stub/KVM: Kill __hyp_get_vectors Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 67/79] arm64: hyp-stub: Zero x0 on successful stub handling Christoffer Dall
` (13 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Nobody is using __hyp_get_vectors anymore, so let's remove both
implementations (hyp-stub and KVM).
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm64/include/asm/virt.h | 12 ++++--------
arch/arm64/kernel/hyp-stub.S | 13 +------------
arch/arm64/kvm/hyp-init.S | 7 +------
arch/arm64/kvm/hyp.S | 2 +-
4 files changed, 7 insertions(+), 27 deletions(-)
diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
index 435514c..c5f8944 100644
--- a/arch/arm64/include/asm/virt.h
+++ b/arch/arm64/include/asm/virt.h
@@ -29,28 +29,25 @@
* indirection of a function call (as implemented in hyp-stub.S).
*/
-/* HVC_GET_VECTORS - Return the value of the vbar_el2 register. */
-#define HVC_GET_VECTORS 0
-
/*
* HVC_SET_VECTORS - Set the value of the vbar_el2 register.
*
* @x1: Physical address of the new vector table.
*/
-#define HVC_SET_VECTORS 1
+#define HVC_SET_VECTORS 0
/*
* HVC_SOFT_RESTART - CPU soft reset, used by the cpu_soft_restart routine.
*/
-#define HVC_SOFT_RESTART 2
+#define HVC_SOFT_RESTART 1
/*
* HVC_RESET_VECTORS - Restore the vectors to the original HYP stubs
*/
-#define HVC_RESET_VECTORS 3
+#define HVC_RESET_VECTORS 2
/* Max number of HYP stub hypercalls */
-#define HVC_STUB_HCALL_NR 4
+#define HVC_STUB_HCALL_NR 3
/* Error returned when an invalid stub number is passed into x0 */
#define HVC_STUB_ERR 0xbadca11
@@ -77,7 +74,6 @@
extern u32 __boot_cpu_mode[2];
void __hyp_set_vectors(phys_addr_t phys_vector_base);
-phys_addr_t __hyp_get_vectors(void);
void __hyp_reset_vectors(void);
/* Reports the availability of HYP mode */
diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
index 8226fd9..d55604d 100644
--- a/arch/arm64/kernel/hyp-stub.S
+++ b/arch/arm64/kernel/hyp-stub.S
@@ -55,12 +55,7 @@ ENDPROC(__hyp_stub_vectors)
.align 11
el1_sync:
- cmp x0, #HVC_GET_VECTORS
- b.ne 1f
- mrs x0, vbar_el2
- b 9f
-
-1: cmp x0, #HVC_SET_VECTORS
+ cmp x0, #HVC_SET_VECTORS
b.ne 2f
msr vbar_el2, x1
b 9f
@@ -118,12 +113,6 @@ ENDPROC(\label)
* initialisation entry point.
*/
-ENTRY(__hyp_get_vectors)
- mov x0, #HVC_GET_VECTORS
- hvc #0
- ret
-ENDPROC(__hyp_get_vectors)
-
ENTRY(__hyp_set_vectors)
mov x1, x0
mov x0, #HVC_SET_VECTORS
diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
index 0ad34fd..3734e63 100644
--- a/arch/arm64/kvm/hyp-init.S
+++ b/arch/arm64/kvm/hyp-init.S
@@ -124,12 +124,7 @@ __do_hyp_init:
ENDPROC(__kvm_hyp_init)
ENTRY(__kvm_handle_stub_hvc)
- cmp x0, #HVC_GET_VECTORS
- b.ne 1f
- mrs x0, vbar_el2
- b exit
-
-1: cmp x0, #HVC_SOFT_RESTART
+ cmp x0, #HVC_SOFT_RESTART
b.ne 1f
/* This is where we're about to jump, staying at EL2 */
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index f6f20b5..952f6cb 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -36,7 +36,7 @@
* passed in x0.
*
* A function pointer with a value less than 0xfff has a special meaning,
- * and is used to implement __hyp_get_vectors in the same way as in
+ * and is used to implement hyp stubs in the same way as in
* arch/arm64/kernel/hyp_stub.S.
*/
ENTRY(__kvm_call_hyp)
--
2.9.0
* [PULL 67/79] arm64: hyp-stub: Zero x0 on successful stub handling
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (65 preceding siblings ...)
2017-04-23 17:09 ` [PULL 66/79] arm64: " Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 68/79] ARM: hyp-stub: Zero r0 " Christoffer Dall
` (12 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
We now return HVC_STUB_ERR when a stub hypercall fails, but we
leave whatever was in x0 on success. Zeroing it on return seems
like a good idea.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm64/kernel/hyp-stub.S | 4 +++-
arch/arm64/kvm/hyp-init.S | 6 +++---
2 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
index d55604d..e1261fb 100644
--- a/arch/arm64/kernel/hyp-stub.S
+++ b/arch/arm64/kernel/hyp-stub.S
@@ -73,8 +73,10 @@ el1_sync:
/* Someone called kvm_call_hyp() against the hyp-stub... */
ldr x0, =HVC_STUB_ERR
+ eret
-9: eret
+9: mov x0, xzr
+ eret
ENDPROC(el1_sync)
.macro invalid_vector label
diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
index 3734e63..839425c 100644
--- a/arch/arm64/kvm/hyp-init.S
+++ b/arch/arm64/kvm/hyp-init.S
@@ -154,13 +154,13 @@ reset:
/* Install stub vectors */
adr_l x5, __hyp_stub_vectors
msr vbar_el2, x5
- b exit
+ mov x0, xzr
+ eret
1: /* Bad stub call */
ldr x0, =HVC_STUB_ERR
-
-exit:
eret
+
ENDPROC(__kvm_handle_stub_hvc)
.ltorg
--
2.9.0
* [PULL 68/79] ARM: hyp-stub: Zero r0 on successful stub handling
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (66 preceding siblings ...)
2017-04-23 17:09 ` [PULL 67/79] arm64: hyp-stub: Zero x0 on successful stub handling Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 69/79] arm/arm64: Add hyp-stub API documentation Christoffer Dall
` (11 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
We now return HVC_STUB_ERR when a stub hypercall fails, but we
leave whatever was in r0 on success. Zeroing it on return seems
like a good idea.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm/kernel/hyp-stub.S | 2 ++
arch/arm/kvm/init.S | 2 ++
2 files changed, 4 insertions(+)
diff --git a/arch/arm/kernel/hyp-stub.S b/arch/arm/kernel/hyp-stub.S
index 918c64f..d8523cc 100644
--- a/arch/arm/kernel/hyp-stub.S
+++ b/arch/arm/kernel/hyp-stub.S
@@ -215,8 +215,10 @@ __hyp_stub_do_trap:
beq __hyp_stub_exit
ldr r0, =HVC_STUB_ERR
+ __ERET
__hyp_stub_exit:
+ mov r0, #0
__ERET
ENDPROC(__hyp_stub_do_trap)
diff --git a/arch/arm/kvm/init.S b/arch/arm/kvm/init.S
index 87bcd7a..570ed4a 100644
--- a/arch/arm/kvm/init.S
+++ b/arch/arm/kvm/init.S
@@ -155,8 +155,10 @@ reset:
b exit
1: ldr r0, =HVC_STUB_ERR
+ eret
exit:
+ mov r0, #0
eret
ENDPROC(__kvm_handle_stub_hvc)
--
2.9.0
* [PULL 69/79] arm/arm64: Add hyp-stub API documentation
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (67 preceding siblings ...)
2017-04-23 17:09 ` [PULL 68/79] ARM: hyp-stub: Zero r0 " Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 70/79] KVM: arm/arm64: Cleanup the arch timer code's irqchip checking Christoffer Dall
` (10 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
In order to help people understand the hyp-stub API that exists
between the host kernel and the hypervisor mode (whether a hypervisor
has been installed or not), let's document said API.
As with any form of documentation, I expect it to become obsolete
and completely misleading within 20 minutes after having been merged.
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
Documentation/virtual/kvm/arm/hyp-abi.txt | 53 +++++++++++++++++++++++++++++++
1 file changed, 53 insertions(+)
create mode 100644 Documentation/virtual/kvm/arm/hyp-abi.txt
diff --git a/Documentation/virtual/kvm/arm/hyp-abi.txt b/Documentation/virtual/kvm/arm/hyp-abi.txt
new file mode 100644
index 0000000..a20a0be
--- /dev/null
+++ b/Documentation/virtual/kvm/arm/hyp-abi.txt
@@ -0,0 +1,53 @@
+* Internal ABI between the kernel and HYP
+
+This file documents the interaction between the Linux kernel and the
+hypervisor layer when running Linux as a hypervisor (for example
+KVM). It doesn't cover the interaction of the kernel with the
+hypervisor when running as a guest (under Xen, KVM or any other
+hypervisor), or any hypervisor-specific interaction when the kernel is
+used as a host.
+
+On arm and arm64 (without VHE), the kernel doesn't run in hypervisor
+mode, but still needs to interact with it, allowing a built-in
+hypervisor to be either installed or torn down.
+
+In order to achieve this, the kernel must be booted at HYP (arm) or
+EL2 (arm64), allowing it to install a set of stubs before dropping to
+SVC/EL1. These stubs are accessible by using a 'hvc #0' instruction,
+and only act on individual CPUs.
+
+Unless specified otherwise, any built-in hypervisor must implement
+these functions (see arch/arm{,64}/include/asm/virt.h):
+
+* r0/x0 = HVC_SET_VECTORS
+ r1/x1 = vectors
+
+ Set HVBAR/VBAR_EL2 to 'vectors' to enable a hypervisor. 'vectors'
+ must be a physical address, and respect the alignment requirements
+ of the architecture. Only implemented by the initial stubs, not by
+ Linux hypervisors.
+
+* r0/x0 = HVC_RESET_VECTORS
+
+ Turn HYP/EL2 MMU off, and reset HVBAR/VBAR_EL2 to the initial
+ stubs' exception vector value. This effectively disables an existing
+ hypervisor.
+
+* r0/x0 = HVC_SOFT_RESTART
+ r1/x1 = restart address
+ x2 = x0's value when entering the next payload (arm64)
+ x3 = x1's value when entering the next payload (arm64)
+ x4 = x2's value when entering the next payload (arm64)
+
+ Mask all exceptions, disable the MMU, move the arguments into place
+ (arm64 only), and jump to the restart address while at HYP/EL2. This
+ hypercall is not expected to return to its caller.
+
+Any other value of r0/x0 triggers a hypervisor-specific handling,
+which is not documented here.
+
+The return value of a stub hypercall is held by r0/x0, and is 0 on
+success, and HVC_STUB_ERR on error. A stub hypercall is allowed to
+clobber any of the caller-saved registers (x0-x18 on arm64, r0-r3 and
+ip on arm). It is thus recommended to use a function call to perform
+the hypercall.
--
2.9.0
* [PULL 70/79] KVM: arm/arm64: Cleanup the arch timer code's irqchip checking
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (68 preceding siblings ...)
2017-04-23 17:09 ` [PULL 69/79] arm/arm64: Add hyp-stub API documentation Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 71/79] KVM: arm/arm64: Add ARM user space interrupt signaling ABI Christoffer Dall
` (9 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Christoffer Dall <christoffer.dall@linaro.org>
Currently we check if we have an in-kernel irqchip and if the vgic was
properly implemented in several places in the arch timer code. But we
already predicate our enablement of the arm timers on having a valid
and initialized gic, so we can simply check if the timers are enabled or
not.
This also gets rid of the ugly "error that's not an error but used to
signal that the timer shouldn't poke the gic" construct we have.
Reviewed-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
virt/kvm/arm/arch_timer.c | 15 +++++++--------
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 35d7100..363f0d2 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -189,8 +189,6 @@ static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
{
int ret;
- BUG_ON(!vgic_initialized(vcpu->kvm));
-
timer_ctx->active_cleared_last = false;
timer_ctx->irq.level = new_level;
trace_kvm_timer_update_irq(vcpu->vcpu_id, timer_ctx->irq.irq,
@@ -205,7 +203,7 @@ static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
* Check if there was a change in the timer state (should we raise or lower
* the line level to the GIC).
*/
-static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
+static void kvm_timer_update_state(struct kvm_vcpu *vcpu)
{
struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
@@ -217,16 +215,14 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
* because the guest would never see the interrupt. Instead wait
* until we call this function from kvm_timer_flush_hwstate.
*/
- if (!vgic_initialized(vcpu->kvm) || !timer->enabled)
- return -ENODEV;
+ if (!timer->enabled)
+ return;
if (kvm_timer_should_fire(vtimer) != vtimer->irq.level)
kvm_timer_update_irq(vcpu, !vtimer->irq.level, vtimer);
if (kvm_timer_should_fire(ptimer) != ptimer->irq.level)
kvm_timer_update_irq(vcpu, !ptimer->irq.level, ptimer);
-
- return 0;
}
/* Schedule the background timer for the emulated timer. */
@@ -295,13 +291,16 @@ void kvm_timer_unschedule(struct kvm_vcpu *vcpu)
*/
void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu)
{
+ struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
bool phys_active;
int ret;
- if (kvm_timer_update_state(vcpu))
+ if (unlikely(!timer->enabled))
return;
+ kvm_timer_update_state(vcpu);
+
/* Set the background timer for the physical timer emulation. */
kvm_timer_emulate(vcpu, vcpu_ptimer(vcpu));
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 71/79] KVM: arm/arm64: Add ARM user space interrupt signaling ABI
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (69 preceding siblings ...)
2017-04-23 17:09 ` [PULL 70/79] KVM: arm/arm64: Cleanup the arch timer code's irqchip checking Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 72/79] KVM: arm/arm64: Support arch timers with a userspace gic Christoffer Dall
` (8 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Alexander Graf <agraf@suse.de>
We have 2 modes for dealing with interrupts in the ARM world. We can
either handle them all using hardware acceleration through the vgic or
we can emulate a gic in user space and only drive CPU IRQ pins from
there.
Unfortunately, when driving IRQs from user space, we never tell user
space about events from devices emulated inside the kernel which may
result in interrupt line state changes, so we lose out on, for example,
timer and PMU events when running with user space gic emulation.
Define an ABI to publish such device output levels to userspace.
Reviewed-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
Documentation/virtual/kvm/api.txt | 42 +++++++++++++++++++++++++++++++++++++++
arch/arm/include/uapi/asm/kvm.h | 2 ++
arch/arm64/include/uapi/asm/kvm.h | 2 ++
include/uapi/linux/kvm.h | 8 ++++++++
4 files changed, 54 insertions(+)
diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 3c248f7..3b4e76e 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -4147,3 +4147,45 @@ This capability, if KVM_CHECK_EXTENSION indicates that it is
available, means that the kernel can support guests using the
hashed page table MMU defined in Power ISA V3.00 (as implemented in
the POWER9 processor), including in-memory segment tables.
+
+
+8.5 KVM_CAP_ARM_USER_IRQ
+
+Architectures: arm, arm64
+This capability, if KVM_CHECK_EXTENSION indicates that it is available, means
+that if userspace creates a VM without an in-kernel interrupt controller, it
+will be notified of changes to the output level of in-kernel emulated devices
+that can generate virtual interrupts presented to the VM.
+For such VMs, on every return to userspace, the kernel
+updates the vcpu's run->s.regs.device_irq_level field to represent the actual
+output level of the device.
+
+Whenever kvm detects a change in the device output level, kvm guarantees at
+least one return to userspace before running the VM. This exit could either
+be a KVM_EXIT_INTR or any other exit event, like KVM_EXIT_MMIO. This way,
+userspace can always sample the device output level and re-compute the state of
+the userspace interrupt controller. Userspace should always check the state
+of run->s.regs.device_irq_level on every kvm exit.
+The value in run->s.regs.device_irq_level can represent both level and edge
+triggered interrupt signals, depending on the device. Edge triggered interrupt
+signals will exit to userspace with the bit in run->s.regs.device_irq_level
+set exactly once per edge signal.
+
+The field run->s.regs.device_irq_level is available independent of
+run->kvm_valid_regs or run->kvm_dirty_regs bits.
+
+If KVM_CAP_ARM_USER_IRQ is supported, the KVM_CHECK_EXTENSION ioctl returns a
+number larger than 0, indicating which version of this capability is
+implemented and thereby which bits in run->s.regs.device_irq_level can signal
+values.
+
+Currently the following bits are defined for the device_irq_level bitmap:
+
+ KVM_CAP_ARM_USER_IRQ >= 1:
+
+ KVM_ARM_DEV_EL1_VTIMER - EL1 virtual timer
+ KVM_ARM_DEV_EL1_PTIMER - EL1 physical timer
+ KVM_ARM_DEV_PMU - ARM PMU overflow interrupt signal
+
+Future versions of kvm may implement additional events. These will get
+indicated by returning a higher number from KVM_CHECK_EXTENSION and will be
+listed above.
diff --git a/arch/arm/include/uapi/asm/kvm.h b/arch/arm/include/uapi/asm/kvm.h
index 6ebd3e6..a5838d6 100644
--- a/arch/arm/include/uapi/asm/kvm.h
+++ b/arch/arm/include/uapi/asm/kvm.h
@@ -114,6 +114,8 @@ struct kvm_debug_exit_arch {
};
struct kvm_sync_regs {
+ /* Used with KVM_CAP_ARM_USER_IRQ */
+ __u64 device_irq_level;
};
struct kvm_arch_memory_slot {
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index c286035..cd6bea4 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -143,6 +143,8 @@ struct kvm_debug_exit_arch {
#define KVM_GUESTDBG_USE_HW (1 << 17)
struct kvm_sync_regs {
+ /* Used with KVM_CAP_ARM_USER_IRQ */
+ __u64 device_irq_level;
};
struct kvm_arch_memory_slot {
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index f51d508..6d6b9b2 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -883,6 +883,7 @@ struct kvm_ppc_resize_hpt {
#define KVM_CAP_PPC_MMU_RADIX 134
#define KVM_CAP_PPC_MMU_HASH_V3 135
#define KVM_CAP_IMMEDIATE_EXIT 136
+#define KVM_CAP_ARM_USER_IRQ 137
#ifdef KVM_CAP_IRQ_ROUTING
@@ -1354,4 +1355,11 @@ struct kvm_assigned_msix_entry {
#define KVM_X2APIC_API_USE_32BIT_IDS (1ULL << 0)
#define KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK (1ULL << 1)
+/* Available with KVM_CAP_ARM_USER_IRQ */
+
+/* Bits for run->s.regs.device_irq_level */
+#define KVM_ARM_DEV_EL1_VTIMER (1 << 0)
+#define KVM_ARM_DEV_EL1_PTIMER (1 << 1)
+#define KVM_ARM_DEV_PMU (1 << 2)
+
#endif /* __LINUX_KVM_H */
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
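To make the ABI above concrete, here is a hedged sketch of how a userspace
irqchip model might decode run->s.regs.device_irq_level after a KVM_RUN
exit. set_gic_input_line() and the *_LINE identifiers are hypothetical
stand-ins for whatever interface the userspace GIC emulation exposes; the
KVM_ARM_DEV_* bits and the sync_regs field come from the uapi headers
extended by this patch.

#include <linux/kvm.h>
#include <stdbool.h>

/* Hypothetical userspace GIC model interface */
enum { VTIMER_LINE, PTIMER_LINE, PMU_LINE };
extern void set_gic_input_line(int line, bool asserted);

void sample_device_irq_levels(struct kvm_run *run)
{
        /* Valid on every exit, independent of kvm_valid_regs/kvm_dirty_regs */
        __u64 level = run->s.regs.device_irq_level;

        set_gic_input_line(VTIMER_LINE, !!(level & KVM_ARM_DEV_EL1_VTIMER));
        set_gic_input_line(PTIMER_LINE, !!(level & KVM_ARM_DEV_EL1_PTIMER));
        set_gic_input_line(PMU_LINE,    !!(level & KVM_ARM_DEV_PMU));
}

Userspace should only rely on this after checking KVM_CAP_ARM_USER_IRQ
(advertised by patch 74/79); kernels without the capability never update
the field.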
* [PULL 72/79] KVM: arm/arm64: Support arch timers with a userspace gic
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (70 preceding siblings ...)
2017-04-23 17:09 ` [PULL 71/79] KVM: arm/arm64: Add ARM user space interrupt signaling ABI Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 73/79] KVM: arm/arm64: Report PMU overflow interrupts to userspace irqchip Christoffer Dall
` (7 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Alexander Graf <agraf@suse.de>
If you're running with a userspace gic or other interrupt controller
(that is, no vgic in the kernel), then you have so far not been able to
use the architected timers, because the output of the architected
timers, which are driven inside the kernel, was a kernel-only construct
between the arch timer code and the vgic.
This patch implements the new KVM_CAP_ARM_USER_IRQ feature, where we use a
side channel on the kvm_run structure, run->s.regs.device_irq_level, to
always notify userspace of the timer output levels when using a userspace
irqchip.
This works by ensuring that before we enter the guest, if the timer
output level has changed compared to what we last told userspace, we
don't enter the guest, but instead return to userspace to notify it of
the new level. If we are exiting, because of an MMIO for example, and
the level changed at the same time, the value is also updated and
userspace can sample the line as it needs. This is nicely achieved by
simply always updating the run->s.regs.device_irq_level field after the
main run loop.
Note that the kvm_timer_update_irq trace event is changed to show the
host IRQ number for the timer instead of the guest IRQ number, because
the kernel no longer knows which IRQ userspace wires the timer signal up
to.
Also note that this patch implements all required functionality but does
not yet advertise the capability.
Reviewed-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
arch/arm/kvm/arm.c | 18 +++----
include/kvm/arm_arch_timer.h | 2 +
virt/kvm/arm/arch_timer.c | 122 +++++++++++++++++++++++++++++++++++--------
3 files changed, 110 insertions(+), 32 deletions(-)
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index c378502..ac6e57b 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -515,13 +515,7 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
return ret;
}
- /*
- * Enable the arch timers only if we have an in-kernel VGIC
- * and it has been properly initialized, since we cannot handle
- * interrupts from the virtual timer with a userspace gic.
- */
- if (irqchip_in_kernel(kvm) && vgic_initialized(kvm))
- ret = kvm_timer_enable(vcpu);
+ ret = kvm_timer_enable(vcpu);
return ret;
}
@@ -640,9 +634,12 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
local_irq_disable();
/*
- * Re-check atomic conditions
+ * If we have a signal pending, or need to notify a userspace
+ * irqchip about timer level changes, then we exit (and update
+ * the timer level state in kvm_timer_update_run below).
*/
- if (signal_pending(current)) {
+ if (signal_pending(current) ||
+ kvm_timer_should_notify_user(vcpu)) {
ret = -EINTR;
run->exit_reason = KVM_EXIT_INTR;
}
@@ -714,6 +711,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
ret = handle_exit(vcpu, run, ret);
}
+ /* Tell userspace about in-kernel device output levels */
+ kvm_timer_update_run(vcpu);
+
if (vcpu->sigset_active)
sigprocmask(SIG_SETMASK, &sigsaved, NULL);
return ret;
diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index fe797d6..295584f 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -63,6 +63,8 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu);
void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu);
void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu);
+bool kvm_timer_should_notify_user(struct kvm_vcpu *vcpu);
+void kvm_timer_update_run(struct kvm_vcpu *vcpu);
void kvm_timer_vcpu_terminate(struct kvm_vcpu *vcpu);
u64 kvm_arm_timer_get_reg(struct kvm_vcpu *, u64 regid);
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 363f0d2..5dc2167 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -184,6 +184,27 @@ bool kvm_timer_should_fire(struct arch_timer_context *timer_ctx)
return cval <= now;
}
+/*
+ * Reflect the timer output level into the kvm_run structure
+ */
+void kvm_timer_update_run(struct kvm_vcpu *vcpu)
+{
+ struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
+ struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
+ struct kvm_sync_regs *regs = &vcpu->run->s.regs;
+
+ if (likely(irqchip_in_kernel(vcpu->kvm)))
+ return;
+
+ /* Populate the device bitmap with the timer states */
+ regs->device_irq_level &= ~(KVM_ARM_DEV_EL1_VTIMER |
+ KVM_ARM_DEV_EL1_PTIMER);
+ if (vtimer->irq.level)
+ regs->device_irq_level |= KVM_ARM_DEV_EL1_VTIMER;
+ if (ptimer->irq.level)
+ regs->device_irq_level |= KVM_ARM_DEV_EL1_PTIMER;
+}
+
static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
struct arch_timer_context *timer_ctx)
{
@@ -194,9 +215,12 @@ static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
trace_kvm_timer_update_irq(vcpu->vcpu_id, timer_ctx->irq.irq,
timer_ctx->irq.level);
- ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, timer_ctx->irq.irq,
- timer_ctx->irq.level);
- WARN_ON(ret);
+ if (likely(irqchip_in_kernel(vcpu->kvm))) {
+ ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
+ timer_ctx->irq.irq,
+ timer_ctx->irq.level);
+ WARN_ON(ret);
+ }
}
/*
@@ -215,7 +239,7 @@ static void kvm_timer_update_state(struct kvm_vcpu *vcpu)
* because the guest would never see the interrupt. Instead wait
* until we call this function from kvm_timer_flush_hwstate.
*/
- if (!timer->enabled)
+ if (unlikely(!timer->enabled))
return;
if (kvm_timer_should_fire(vtimer) != vtimer->irq.level)
@@ -282,28 +306,12 @@ void kvm_timer_unschedule(struct kvm_vcpu *vcpu)
timer_disarm(timer);
}
-/**
- * kvm_timer_flush_hwstate - prepare to move the virt timer to the cpu
- * @vcpu: The vcpu pointer
- *
- * Check if the virtual timer has expired while we were running in the host,
- * and inject an interrupt if that was the case.
- */
-void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu)
+static void kvm_timer_flush_hwstate_vgic(struct kvm_vcpu *vcpu)
{
- struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
bool phys_active;
int ret;
- if (unlikely(!timer->enabled))
- return;
-
- kvm_timer_update_state(vcpu);
-
- /* Set the background timer for the physical timer emulation. */
- kvm_timer_emulate(vcpu, vcpu_ptimer(vcpu));
-
/*
* If we enter the guest with the virtual input level to the VGIC
* asserted, then we have already told the VGIC what we need to, and
@@ -355,11 +363,72 @@ void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu)
vtimer->active_cleared_last = !phys_active;
}
+bool kvm_timer_should_notify_user(struct kvm_vcpu *vcpu)
+{
+ struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
+ struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
+ struct kvm_sync_regs *sregs = &vcpu->run->s.regs;
+ bool vlevel, plevel;
+
+ if (likely(irqchip_in_kernel(vcpu->kvm)))
+ return false;
+
+ vlevel = sregs->device_irq_level & KVM_ARM_DEV_EL1_VTIMER;
+ plevel = sregs->device_irq_level & KVM_ARM_DEV_EL1_PTIMER;
+
+ return vtimer->irq.level != vlevel ||
+ ptimer->irq.level != plevel;
+}
+
+static void kvm_timer_flush_hwstate_user(struct kvm_vcpu *vcpu)
+{
+ struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
+
+ /*
+ * To prevent continuously exiting from the guest, we mask the
+ * physical interrupt such that the guest can make forward progress.
+ * Once we detect the output level being deasserted, we unmask the
+ * interrupt again so that we exit from the guest when the timer
+ * fires.
+ */
+ if (vtimer->irq.level)
+ disable_percpu_irq(host_vtimer_irq);
+ else
+ enable_percpu_irq(host_vtimer_irq, 0);
+}
+
+/**
+ * kvm_timer_flush_hwstate - prepare timers before running the vcpu
+ * @vcpu: The vcpu pointer
+ *
+ * Check if the virtual timer has expired while we were running in the host,
+ * and inject an interrupt if that was the case, making sure the timer is
+ * masked or disabled on the host so that we keep executing. Also schedule a
+ * software timer for the physical timer if it is enabled.
+ */
+void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu)
+{
+ struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+
+ if (unlikely(!timer->enabled))
+ return;
+
+ kvm_timer_update_state(vcpu);
+
+ /* Set the background timer for the physical timer emulation. */
+ kvm_timer_emulate(vcpu, vcpu_ptimer(vcpu));
+
+ if (unlikely(!irqchip_in_kernel(vcpu->kvm)))
+ kvm_timer_flush_hwstate_user(vcpu);
+ else
+ kvm_timer_flush_hwstate_vgic(vcpu);
+}
+
/**
* kvm_timer_sync_hwstate - sync timer state from cpu
* @vcpu: The vcpu pointer
*
- * Check if the virtual timer has expired while we were running in the guest,
+ * Check if any of the timers have expired while we were running in the guest,
* and inject an interrupt if that was the case.
*/
void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu)
@@ -559,6 +628,13 @@ int kvm_timer_enable(struct kvm_vcpu *vcpu)
if (timer->enabled)
return 0;
+ /* Without a VGIC we do not map virtual IRQs to physical IRQs */
+ if (!irqchip_in_kernel(vcpu->kvm))
+ goto no_vgic;
+
+ if (!vgic_initialized(vcpu->kvm))
+ return -ENODEV;
+
/*
* Find the physical IRQ number corresponding to the host_vtimer_irq
*/
@@ -582,8 +658,8 @@ int kvm_timer_enable(struct kvm_vcpu *vcpu)
if (ret)
return ret;
+no_vgic:
timer->enabled = 1;
-
return 0;
}
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
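A hedged sketch of the matching userspace run loop for a VM without an
in-kernel irqchip follows; sample_device_irq_levels() is the hypothetical
helper from the previous sketch. The only point being illustrated is the
one made above: every exit, including the forced KVM_EXIT_INTR exits, is
followed by re-sampling the published levels before re-entering the guest.

#include <errno.h>
#include <linux/kvm.h>
#include <sys/ioctl.h>

extern void sample_device_irq_levels(struct kvm_run *run);

static void run_vcpu_loop(int vcpu_fd, struct kvm_run *run)
{
        for (;;) {
                int ret = ioctl(vcpu_fd, KVM_RUN, 0);

                /*
                 * The kernel guarantees at least one exit after a device
                 * output level change, so sampling on every exit never
                 * misses a timer update.
                 */
                sample_device_irq_levels(run);

                if (ret < 0 && errno == EINTR)
                        continue;       /* signal or level-change notification */

                if (run->exit_reason == KVM_EXIT_MMIO) {
                        /* emulate the access; may update the GIC model too */
                }
        }
}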
* [PULL 73/79] KVM: arm/arm64: Report PMU overflow interrupts to userspace irqchip
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (71 preceding siblings ...)
2017-04-23 17:09 ` [PULL 72/79] KVM: arm/arm64: Support arch timers with a userspace gic Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 74/79] KVM: arm/arm64: Advertise support for KVM_CAP_ARM_USER_IRQ Christoffer Dall
` (6 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Christoffer Dall <christoffer.dall@linaro.org>
When not using an in-kernel VGIC, but instead emulating an interrupt
controller in userspace, we should report the PMU overflow status to
that userspace interrupt controller using the KVM_CAP_ARM_USER_IRQ
feature.
Reviewed-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
arch/arm/kvm/arm.c | 13 +++++++++----
include/kvm/arm_pmu.h | 7 +++++++
virt/kvm/arm/arch_timer.c | 3 ---
virt/kvm/arm/pmu.c | 39 +++++++++++++++++++++++++++++++++++----
4 files changed, 51 insertions(+), 11 deletions(-)
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index ac6e57b..9eda293 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -635,11 +635,13 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
/*
* If we have a signal pending, or need to notify a userspace
- * irqchip about timer level changes, then we exit (and update
- * the timer level state in kvm_timer_update_run below).
+ * irqchip about timer or PMU level changes, then we exit (and
+ * update the timer level state in kvm_timer_update_run
+ * below).
*/
if (signal_pending(current) ||
- kvm_timer_should_notify_user(vcpu)) {
+ kvm_timer_should_notify_user(vcpu) ||
+ kvm_pmu_should_notify_user(vcpu)) {
ret = -EINTR;
run->exit_reason = KVM_EXIT_INTR;
}
@@ -712,7 +714,10 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
}
/* Tell userspace about in-kernel device output levels */
- kvm_timer_update_run(vcpu);
+ if (unlikely(!irqchip_in_kernel(vcpu->kvm))) {
+ kvm_timer_update_run(vcpu);
+ kvm_pmu_update_run(vcpu);
+ }
if (vcpu->sigset_active)
sigprocmask(SIG_SETMASK, &sigsaved, NULL);
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 92e7e97..1ab4633 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -50,6 +50,8 @@ void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val);
void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
+bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu);
+void kvm_pmu_update_run(struct kvm_vcpu *vcpu);
void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val);
void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val);
void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
@@ -85,6 +87,11 @@ static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
static inline void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val) {}
static inline void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {}
static inline void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) {}
+static inline bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu)
+{
+ return false;
+}
+static inline void kvm_pmu_update_run(struct kvm_vcpu *vcpu) {}
static inline void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val) {}
static inline void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val) {}
static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 5dc2167..5976609 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -193,9 +193,6 @@ void kvm_timer_update_run(struct kvm_vcpu *vcpu)
struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
struct kvm_sync_regs *regs = &vcpu->run->s.regs;
- if (likely(irqchip_in_kernel(vcpu->kvm)))
- return;
-
/* Populate the device bitmap with the timer states */
regs->device_irq_level &= ~(KVM_ARM_DEV_EL1_VTIMER |
KVM_ARM_DEV_EL1_PTIMER);
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 69ccce3..4b43e7f 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -230,13 +230,44 @@ static void kvm_pmu_update_state(struct kvm_vcpu *vcpu)
return;
overflow = !!kvm_pmu_overflow_status(vcpu);
- if (pmu->irq_level != overflow) {
- pmu->irq_level = overflow;
- kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
- pmu->irq_num, overflow);
+ if (pmu->irq_level == overflow)
+ return;
+
+ pmu->irq_level = overflow;
+
+ if (likely(irqchip_in_kernel(vcpu->kvm))) {
+ int ret;
+ ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
+ pmu->irq_num, overflow);
+ WARN_ON(ret);
}
}
+bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu)
+{
+ struct kvm_pmu *pmu = &vcpu->arch.pmu;
+ struct kvm_sync_regs *sregs = &vcpu->run->s.regs;
+ bool run_level = sregs->device_irq_level & KVM_ARM_DEV_PMU;
+
+ if (likely(irqchip_in_kernel(vcpu->kvm)))
+ return false;
+
+ return pmu->irq_level != run_level;
+}
+
+/*
+ * Reflect the PMU overflow interrupt output level into the kvm_run structure
+ */
+void kvm_pmu_update_run(struct kvm_vcpu *vcpu)
+{
+ struct kvm_sync_regs *regs = &vcpu->run->s.regs;
+
+ /* Populate the device bitmap with the PMU overflow state */
+ regs->device_irq_level &= ~KVM_ARM_DEV_PMU;
+ if (vcpu->arch.pmu.irq_level)
+ regs->device_irq_level |= KVM_ARM_DEV_PMU;
+}
+
/**
* kvm_pmu_flush_hwstate - flush pmu state to cpu
* @vcpu: The vcpu pointer
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 74/79] KVM: arm/arm64: Advertise support for KVM_CAP_ARM_USER_IRQ
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (72 preceding siblings ...)
2017-04-23 17:09 ` [PULL 73/79] KVM: arm/arm64: Report PMU overflow interrupts to userspace irqchip Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 75/79] KVM: arm/arm64: fix races in kvm_psci_vcpu_on Christoffer Dall
` (5 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Christoffer Dall <christoffer.dall@linaro.org>
Now that we support both timers and PMU reporting interrupts
to userspace, we can advertise this support.
Reviewed-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
arch/arm/kvm/arm.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 9eda293..7941699 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -229,6 +229,13 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
else
r = kvm->arch.vgic.msis_require_devid;
break;
+ case KVM_CAP_ARM_USER_IRQ:
+ /*
+ * 1: EL1_VTIMER, EL1_PTIMER, and PMU.
+ * (bump this number if adding more devices)
+ */
+ r = 1;
+ break;
default:
r = kvm_arch_dev_ioctl_check_extension(kvm, ext);
break;
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
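A small hedged sketch of probing the capability from userspace: since the
check above lives in kvm_vm_ioctl_check_extension(), KVM_CHECK_EXTENSION
can be issued on the VM file descriptor, and the returned number tells
userspace which KVM_ARM_DEV_* bits may be reported.

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Returns 0 if the capability is absent, otherwise its version (1 here). */
static int arm_user_irq_version(int vm_fd)
{
        int ver = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_USER_IRQ);

        return ver > 0 ? ver : 0;
}

With version 1, the EL1 virtual timer, EL1 physical timer and PMU bits are
valid; a higher return value would indicate additional bits, as documented
in patch 71/79.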
* [PULL 75/79] KVM: arm/arm64: fix races in kvm_psci_vcpu_on
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (73 preceding siblings ...)
2017-04-23 17:09 ` [PULL 74/79] KVM: arm/arm64: Advertise support for KVM_CAP_ARM_USER_IRQ Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 76/79] KVM: arm/arm64: vgic-v3: De-optimize VMCR save/restore when emulating a GICv2 Christoffer Dall
` (4 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Andrew Jones <drjones@redhat.com>
Fix potential races in kvm_psci_vcpu_on() by taking the kvm->lock
mutex. In general, it's a bad idea to allow more than one PSCI_CPU_ON
to process the same target VCPU at the same time. One such problem
that may arise is that one PSCI_CPU_ON could be resetting the target
vcpu, which fills the entire sys_regs array with a temporary value
including the MPIDR register, while another looks up the VCPU based
on the MPIDR value, resulting in no target VCPU being found. This
resolves both races found with the kvm-unit-tests/arm/psci unit test.
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Reported-by: Levente Kurusa <lkurusa@redhat.com>
Suggested-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Andrew Jones <drjones@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm/kvm/psci.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/arch/arm/kvm/psci.c b/arch/arm/kvm/psci.c
index c2b1315..a08d7a9 100644
--- a/arch/arm/kvm/psci.c
+++ b/arch/arm/kvm/psci.c
@@ -208,9 +208,10 @@ int kvm_psci_version(struct kvm_vcpu *vcpu)
static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
{
- int ret = 1;
+ struct kvm *kvm = vcpu->kvm;
unsigned long psci_fn = vcpu_get_reg(vcpu, 0) & ~((u32) 0);
unsigned long val;
+ int ret = 1;
switch (psci_fn) {
case PSCI_0_2_FN_PSCI_VERSION:
@@ -230,7 +231,9 @@ static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
break;
case PSCI_0_2_FN_CPU_ON:
case PSCI_0_2_FN64_CPU_ON:
+ mutex_lock(&kvm->lock);
val = kvm_psci_vcpu_on(vcpu);
+ mutex_unlock(&kvm->lock);
break;
case PSCI_0_2_FN_AFFINITY_INFO:
case PSCI_0_2_FN64_AFFINITY_INFO:
@@ -279,6 +282,7 @@ static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
static int kvm_psci_0_1_call(struct kvm_vcpu *vcpu)
{
+ struct kvm *kvm = vcpu->kvm;
unsigned long psci_fn = vcpu_get_reg(vcpu, 0) & ~((u32) 0);
unsigned long val;
@@ -288,7 +292,9 @@ static int kvm_psci_0_1_call(struct kvm_vcpu *vcpu)
val = PSCI_RET_SUCCESS;
break;
case KVM_PSCI_FN_CPU_ON:
+ mutex_lock(&kvm->lock);
val = kvm_psci_vcpu_on(vcpu);
+ mutex_unlock(&kvm->lock);
break;
default:
val = PSCI_RET_NOT_SUPPORTED;
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 76/79] KVM: arm/arm64: vgic-v3: De-optimize VMCR save/restore when emulating a GICv2
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (74 preceding siblings ...)
2017-04-23 17:09 ` [PULL 75/79] KVM: arm/arm64: fix races in kvm_psci_vcpu_on Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 77/79] KVM: arm/arm64: vgic-v3: Fix off-by-one LR access Christoffer Dall
` (3 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
When emulating a GICv2-on-GICv3, special care must be taken to only
save/restore VMCR_EL2 when ICC_SRE_EL1.SRE is cleared. Otherwise,
all Group-0 interrupts end up being delivered as FIQ, which is
probably not what the guest expects, as demonstrated here with
an unhappy EFI:
FIQ Exception at 0x000000013BD21CC4
This means that we cannot perform the load/put trick when dealing
with VMCR_EL2 (because the host has SRE set), and we have to deal
with it in the world-switch.
Fortunately, this is not the most common case (modern guests should
be able to deal with GICv3 directly), and the performance is not worse
than what it was before the VMCR optimization.
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
virt/kvm/arm/hyp/vgic-v3-sr.c | 8 ++++++--
virt/kvm/arm/vgic/vgic-v3.c | 11 +++++++++--
2 files changed, 15 insertions(+), 4 deletions(-)
diff --git a/virt/kvm/arm/hyp/vgic-v3-sr.c b/virt/kvm/arm/hyp/vgic-v3-sr.c
index 3d0b1dd..91922c1 100644
--- a/virt/kvm/arm/hyp/vgic-v3-sr.c
+++ b/virt/kvm/arm/hyp/vgic-v3-sr.c
@@ -128,8 +128,10 @@ void __hyp_text __vgic_v3_save_state(struct kvm_vcpu *vcpu)
* Make sure stores to the GIC via the memory mapped interface
* are now visible to the system register interface.
*/
- if (!cpu_if->vgic_sre)
+ if (!cpu_if->vgic_sre) {
dsb(st);
+ cpu_if->vgic_vmcr = read_gicreg(ICH_VMCR_EL2);
+ }
if (used_lrs) {
int i;
@@ -205,11 +207,13 @@ void __hyp_text __vgic_v3_restore_state(struct kvm_vcpu *vcpu)
* delivered as a FIQ to the guest, with potentially fatal
* consequences. So we must make sure that ICC_SRE_EL1 has
* been actually programmed with the value we want before
- * starting to mess with the rest of the GIC.
+ * starting to mess with the rest of the GIC, and VMCR_EL2 in
+ * particular.
*/
if (!cpu_if->vgic_sre) {
write_gicreg(0, ICC_SRE_EL1);
isb();
+ write_gicreg(cpu_if->vgic_vmcr, ICH_VMCR_EL2);
}
val = read_gicreg(ICH_VTR_EL2);
diff --git a/virt/kvm/arm/vgic/vgic-v3.c b/virt/kvm/arm/vgic/vgic-v3.c
index bc7010d..df15036 100644
--- a/virt/kvm/arm/vgic/vgic-v3.c
+++ b/virt/kvm/arm/vgic/vgic-v3.c
@@ -373,12 +373,19 @@ void vgic_v3_load(struct kvm_vcpu *vcpu)
{
struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
- kvm_call_hyp(__vgic_v3_write_vmcr, cpu_if->vgic_vmcr);
+ /*
+ * If dealing with a GICv2 emulation on GICv3, VMCR_EL2.VFIQen
+ * is dependent on ICC_SRE_EL1.SRE, and we have to perform the
+ * VMCR_EL2 save/restore in the world switch.
+ */
+ if (likely(cpu_if->vgic_sre))
+ kvm_call_hyp(__vgic_v3_write_vmcr, cpu_if->vgic_vmcr);
}
void vgic_v3_put(struct kvm_vcpu *vcpu)
{
struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
- cpu_if->vgic_vmcr = kvm_call_hyp(__vgic_v3_read_vmcr);
+ if (likely(cpu_if->vgic_sre))
+ cpu_if->vgic_vmcr = kvm_call_hyp(__vgic_v3_read_vmcr);
}
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 77/79] KVM: arm/arm64: vgic-v3: Fix off-by-one LR access
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (75 preceding siblings ...)
2017-04-23 17:09 ` [PULL 76/79] KVM: arm/arm64: vgic-v3: De-optimize VMCR save/restore when emulating a GICv2 Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 78/79] ARM: hyp-stub: Fix Thumb-2 compilation Christoffer Dall
` (2 subsequent siblings)
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
When iterating over the used LRs, be careful not to try to access
an unused LR, or even an unimplemented one if you're unlucky...
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
virt/kvm/arm/hyp/vgic-v3-sr.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/virt/kvm/arm/hyp/vgic-v3-sr.c b/virt/kvm/arm/hyp/vgic-v3-sr.c
index 91922c1..bce6037 100644
--- a/virt/kvm/arm/hyp/vgic-v3-sr.c
+++ b/virt/kvm/arm/hyp/vgic-v3-sr.c
@@ -143,7 +143,7 @@ void __hyp_text __vgic_v3_save_state(struct kvm_vcpu *vcpu)
val = read_gicreg(ICH_VTR_EL2);
nr_pri_bits = vtr_to_nr_pri_bits(val);
- for (i = 0; i <= used_lrs; i++) {
+ for (i = 0; i < used_lrs; i++) {
if (cpu_if->vgic_elrsr & (1 << i))
cpu_if->vgic_lr[i] &= ~ICH_LR_STATE;
else
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 78/79] ARM: hyp-stub: Fix Thumb-2 compilation
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (76 preceding siblings ...)
2017-04-23 17:09 ` [PULL 77/79] KVM: arm/arm64: vgic-v3: Fix off-by-one LR access Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-23 17:09 ` [PULL 79/79] ARM: KVM: Fix idmap stub entry when running Thumb-2 code Christoffer Dall
2017-04-27 15:34 ` [PULL 00/79] KVM/ARM Changes for v4.12 Paolo Bonzini
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
The assembler defaults to emitting the short form of ADR, leading
to an out-of-range immediate. Using the wide version solves this
issue.
Fixes: bc845e4fbbbb ("ARM: KVM: Implement HVC_RESET_VECTORS stub hypercall in the init code")
Reported-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm/kernel/hyp-stub.S | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/kernel/hyp-stub.S b/arch/arm/kernel/hyp-stub.S
index d8523cc..ec7e737 100644
--- a/arch/arm/kernel/hyp-stub.S
+++ b/arch/arm/kernel/hyp-stub.S
@@ -125,7 +125,7 @@ ENTRY(__hyp_stub_install_secondary)
* (see safe_svcmode_maskall).
*/
@ Now install the hypervisor stub:
- adr r7, __hyp_stub_vectors
+ W(adr) r7, __hyp_stub_vectors
mcr p15, 4, r7, c12, c0, 0 @ set hypervisor vector base (HVBAR)
@ Disable all traps, so we don't get any nasty surprise
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 79/79] ARM: KVM: Fix idmap stub entry when running Thumb-2 code
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (77 preceding siblings ...)
2017-04-23 17:09 ` [PULL 78/79] ARM: hyp-stub: Fix Thumb-2 compilation Christoffer Dall
@ 2017-04-23 17:09 ` Christoffer Dall
2017-04-27 15:34 ` [PULL 00/79] KVM/ARM Changes for v4.12 Paolo Bonzini
79 siblings, 0 replies; 81+ messages in thread
From: Christoffer Dall @ 2017-04-23 17:09 UTC (permalink / raw)
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
When entering the hyp stub implemented in the idmap, we try to
be mindful of the fact that we could be running a Thumb-2 kernel
by adding 1 to the address we compute. Unfortunately, the assembler
also knows about this trick, and has already generated an address
that has bit 0 set in the literal pool.
Our superfluous correction ends up confusing the CPU entirely,
as we now branch to the stub in ARM mode instead of Thumb, and on
a possibly unaligned address for good measure. From that point,
nothing really good happens.
The obvious fix is to remove this stupid target PC correction.
Fixes: 6bebcecb6c5b ("ARM: KVM: Allow the main HYP code to use the init hyp stub implementation")
Reported-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
arch/arm/kvm/hyp/hyp-entry.S | 1 -
1 file changed, 1 deletion(-)
diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
index a35baa8..95a2fae 100644
--- a/arch/arm/kvm/hyp/hyp-entry.S
+++ b/arch/arm/kvm/hyp/hyp-entry.S
@@ -144,7 +144,6 @@ hyp_hvc:
ldr r1, [r1]
ldr ip, =__kvm_handle_stub_hvc
sub ip, ip, r1
-THUMB( add ip, ip, #1)
pop {r1}
bx ip
--
2.9.0
^ permalink raw reply related [flat|nested] 81+ messages in thread
* [PULL 00/79] KVM/ARM Changes for v4.12
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
` (78 preceding siblings ...)
2017-04-23 17:09 ` [PULL 79/79] ARM: KVM: Fix idmap stub entry when running Thumb-2 code Christoffer Dall
@ 2017-04-27 15:34 ` Paolo Bonzini
79 siblings, 0 replies; 81+ messages in thread
From: Paolo Bonzini @ 2017-04-27 15:34 UTC (permalink / raw)
To: linux-arm-kernel
On 23/04/2017 19:08, Christoffer Dall wrote:
> git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm.git kvm-arm-for-v4.12
Pulled, thanks.
Paolo
^ permalink raw reply [flat|nested] 81+ messages in thread
Thread overview: 81+ messages in thread:
2017-04-23 17:08 [PULL 00/79] KVM/ARM Changes for v4.12 Christoffer Dall
2017-04-23 17:08 ` [PULL 01/79] arm64: sysreg: sort by encoding Christoffer Dall
2017-04-23 17:08 ` [PULL 02/79] arm64: sysreg: add debug system registers Christoffer Dall
2017-04-23 17:08 ` [PULL 03/79] arm64: sysreg: add performance monitor registers Christoffer Dall
2017-04-23 17:08 ` [PULL 04/79] arm64: sysreg: subsume GICv3 sysreg definitions Christoffer Dall
2017-04-23 17:08 ` [PULL 05/79] arm64: sysreg: add physical timer registers Christoffer Dall
2017-04-23 17:08 ` [PULL 06/79] arm64: sysreg: add register encodings used by KVM Christoffer Dall
2017-04-23 17:08 ` [PULL 07/79] arm64: sysreg: add Set/Way sys encodings Christoffer Dall
2017-04-23 17:08 ` [PULL 08/79] KVM: arm64: add SYS_DESC() Christoffer Dall
2017-04-23 17:08 ` [PULL 09/79] KVM: arm64: Use common debug sysreg definitions Christoffer Dall
2017-04-23 17:08 ` [PULL 10/79] KVM: arm64: Use common performance monitor " Christoffer Dall
2017-04-23 17:08 ` [PULL 11/79] KVM: arm64: Use common GICv3 " Christoffer Dall
2017-04-23 17:08 ` [PULL 12/79] KVM: arm64: Use common physical timer " Christoffer Dall
2017-04-23 17:08 ` [PULL 13/79] KVM: arm64: use common invariant " Christoffer Dall
2017-04-23 17:08 ` [PULL 14/79] KVM: arm64: Use common " Christoffer Dall
2017-04-23 17:08 ` [PULL 15/79] KVM: arm64: Use common Set/Way sys definitions Christoffer Dall
2017-04-23 17:08 ` [PULL 16/79] kvm: arm/arm64: Rework gpa callback handlers Christoffer Dall
2017-04-23 17:08 ` [PULL 17/79] KVM: arm/arm64: vgic: Defer touching GICH_VMCR to vcpu_load/put Christoffer Dall
2017-04-23 17:08 ` [PULL 18/79] KVM: arm/arm64: vgic: Avoid flushing vgic state when there's no pending IRQ Christoffer Dall
2017-04-23 17:08 ` [PULL 19/79] KVM: arm/arm64: vgic: Get rid of live_lrs Christoffer Dall
2017-04-23 17:08 ` [PULL 20/79] KVM: arm/arm64: vgic: Only set underflow when actually out of LRs Christoffer Dall
2017-04-23 17:08 ` [PULL 21/79] KVM: arm/arm64: vgic: Get rid of unnecessary process_maintenance operation Christoffer Dall
2017-04-23 17:08 ` [PULL 22/79] KVM: arm/arm64: vgic: Get rid of unnecessary save_maint_int_state Christoffer Dall
2017-04-23 17:08 ` [PULL 23/79] KVM: arm/arm64: vgic: Get rid of MISR and EISR fields Christoffer Dall
2017-04-23 17:08 ` [PULL 24/79] KVM: arm/arm64: vgic: Implement early VGIC init functionality Christoffer Dall
2017-04-23 17:08 ` [PULL 25/79] KVM: arm/arm64: vgic: Don't check vgic_initialized in sync/flush Christoffer Dall
2017-04-23 17:08 ` [PULL 26/79] KVM: arm/arm64: vgic: Improve sync_hwstate performance Christoffer Dall
2017-04-23 17:08 ` [PULL 27/79] arm64: KVM: PMU: Refactor pmu_*_el0_disabled Christoffer Dall
2017-04-23 17:08 ` [PULL 28/79] arm64: KVM: PMU: Inject UNDEF exception on illegal register access Christoffer Dall
2017-04-23 17:08 ` [PULL 29/79] arm64: KVM: PMU: Inject UNDEF on non-privileged accesses Christoffer Dall
2017-04-23 17:08 ` [PULL 30/79] arm64: KVM: Make unexpected reads from WO registers inject an undef Christoffer Dall
2017-04-23 17:08 ` [PULL 31/79] arm64: KVM: PMU: Inject UNDEF on read access to PMSWINC_EL0 Christoffer Dall
2017-04-23 17:08 ` [PULL 32/79] arm64: KVM: Treat sysreg accessors returning false as successful Christoffer Dall
2017-04-23 17:08 ` [PULL 33/79] arm64: KVM: Do not corrupt registers on failed 64bit CP read Christoffer Dall
2017-04-23 17:08 ` [PULL 34/79] arm: KVM: Make unexpected register accesses inject an undef Christoffer Dall
2017-04-23 17:08 ` [PULL 35/79] arm: KVM: Treat CP15 accessors returning false as successful Christoffer Dall
2017-04-23 17:08 ` [PULL 36/79] arm64: hyp-stub: Stop pointlessly clobbering lr Christoffer Dall
2017-04-23 17:08 ` [PULL 37/79] arm64: KVM: Move lr save/restore to do_el2_call Christoffer Dall
2017-04-23 17:08 ` [PULL 38/79] arm64: hyp-stub: Don't save lr in the EL1 code Christoffer Dall
2017-04-23 17:08 ` [PULL 39/79] arm64: hyp-stub: Define a return value for failed stub calls Christoffer Dall
2017-04-23 17:08 ` [PULL 40/79] arm64: hyp-stub: Update documentation in asm/virt.h Christoffer Dall
2017-04-23 17:08 ` [PULL 41/79] arm64: hyp-stub: Implement HVC_RESET_VECTORS stub hypercall Christoffer Dall
2017-04-23 17:08 ` [PULL 42/79] arm64: KVM: Implement HVC_RESET_VECTORS stub hypercall in the init code Christoffer Dall
2017-04-23 17:08 ` [PULL 43/79] arm64: KVM: Implement HVC_GET_VECTORS " Christoffer Dall
2017-04-23 17:08 ` [PULL 44/79] arm64: KVM: Allow the main HYP code to use the init hyp stub implementation Christoffer Dall
2017-04-23 17:08 ` [PULL 45/79] arm64: KVM: Convert __cpu_reset_hyp_mode to using __hyp_reset_vectors Christoffer Dall
2017-04-23 17:08 ` [PULL 46/79] arm64: KVM: Implement HVC_SOFT_RESTART in the init code Christoffer Dall
2017-04-23 17:08 ` [PULL 47/79] ARM: hyp-stub: improve ABI Christoffer Dall
2017-04-23 17:08 ` [PULL 48/79] ARM: soft-reboot into same mode that we entered the kernel Christoffer Dall
2017-04-23 17:08 ` [PULL 49/79] ARM: KVM: Convert KVM to use HVC_GET_VECTORS Christoffer Dall
2017-04-23 17:09 ` [PULL 50/79] ARM: Update cpu_v7_reset documentation Christoffer Dall
2017-04-23 17:09 ` [PULL 51/79] ARM: hyp-stub: Use r1 for the soft-restart address Christoffer Dall
2017-04-23 17:09 ` [PULL 52/79] ARM: Expose the VA/IDMAP offset Christoffer Dall
2017-04-23 17:09 ` [PULL 53/79] ARM: hyp-stub: Define a return value for failed stub calls Christoffer Dall
2017-04-23 17:09 ` [PULL 54/79] ARM: hyp-stub: Implement HVC_RESET_VECTORS stub hypercall Christoffer Dall
2017-04-23 17:09 ` [PULL 55/79] ARM: KVM: Implement HVC_RESET_VECTORS stub hypercall in the init code Christoffer Dall
2017-04-23 17:09 ` [PULL 56/79] ARM: KVM: Implement HVC_GET_VECTORS " Christoffer Dall
2017-04-23 17:09 ` [PULL 57/79] ARM: KVM: Allow the main HYP code to use the init hyp stub implementation Christoffer Dall
2017-04-23 17:09 ` [PULL 58/79] ARM: KVM: Convert __cpu_reset_hyp_mode to using __hyp_reset_vectors Christoffer Dall
2017-04-23 17:09 ` [PULL 59/79] ARM: KVM: Implement HVC_SOFT_RESTART in the init code Christoffer Dall
2017-04-23 17:09 ` [PULL 60/79] ARM: KVM: Gracefully handle hyp-stubs being restored from under our feet Christoffer Dall
2017-04-23 17:09 ` [PULL 61/79] arm/arm64: KVM: Use __hyp_reset_vectors() directly Christoffer Dall
2017-04-23 17:09 ` [PULL 62/79] arm/arm64: KVM: Remove kvm_get_idmap_start Christoffer Dall
2017-04-23 17:09 ` [PULL 63/79] arm/arm64: KVM: Use HVC_RESET_VECTORS to reinit HYP mode Christoffer Dall
2017-04-23 17:09 ` [PULL 64/79] ARM: decompressor: Remove __hyp_get_vectors usage Christoffer Dall
2017-04-23 17:09 ` [PULL 65/79] ARM: hyp-stub/KVM: Kill __hyp_get_vectors Christoffer Dall
2017-04-23 17:09 ` [PULL 66/79] arm64: " Christoffer Dall
2017-04-23 17:09 ` [PULL 67/79] arm64: hyp-stub: Zero x0 on successful stub handling Christoffer Dall
2017-04-23 17:09 ` [PULL 68/79] ARM: hyp-stub: Zero r0 " Christoffer Dall
2017-04-23 17:09 ` [PULL 69/79] arm/arm64: Add hyp-stub API documentation Christoffer Dall
2017-04-23 17:09 ` [PULL 70/79] KVM: arm/arm64: Cleanup the arch timer code's irqchip checking Christoffer Dall
2017-04-23 17:09 ` [PULL 71/79] KVM: arm/arm64: Add ARM user space interrupt signaling ABI Christoffer Dall
2017-04-23 17:09 ` [PULL 72/79] KVM: arm/arm64: Support arch timers with a userspace gic Christoffer Dall
2017-04-23 17:09 ` [PULL 73/79] KVM: arm/arm64: Report PMU overflow interrupts to userspace irqchip Christoffer Dall
2017-04-23 17:09 ` [PULL 74/79] KVM: arm/arm64: Advertise support for KVM_CAP_ARM_USER_IRQ Christoffer Dall
2017-04-23 17:09 ` [PULL 75/79] KVM: arm/arm64: fix races in kvm_psci_vcpu_on Christoffer Dall
2017-04-23 17:09 ` [PULL 76/79] KVM: arm/arm64: vgic-v3: De-optimize VMCR save/restore when emulating a GICv2 Christoffer Dall
2017-04-23 17:09 ` [PULL 77/79] KVM: arm/arm64: vgic-v3: Fix off-by-one LR access Christoffer Dall
2017-04-23 17:09 ` [PULL 78/79] ARM: hyp-stub: Fix Thumb-2 compilation Christoffer Dall
2017-04-23 17:09 ` [PULL 79/79] ARM: KVM: Fix idmap stub entry when running Thumb-2 code Christoffer Dall
2017-04-27 15:34 ` [PULL 00/79] KVM/ARM Changes for v4.12 Paolo Bonzini