* [PATCH v6 00/21] KVM: ARM64: Add guest PMU support
@ 2015-12-08 12:47 Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 01/21] ARM64: Move PMU register related defines to asm/pmu.h Shannon Zhao
` (21 more replies)
0 siblings, 22 replies; 48+ messages in thread
From: Shannon Zhao @ 2015-12-08 12:47 UTC (permalink / raw)
To: linux-arm-kernel
From: Shannon Zhao <shannon.zhao@linaro.org>
This patchset adds guest PMU support for KVM on ARM64. It takes a
trap-and-emulate approach: when the guest wants to monitor an event, the
register access traps to KVM, which calls the perf_event API to create a
corresponding perf event and later calls the relevant perf_event APIs to
read the event's count value.
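As a rough illustration of the trap-and-emulate idea, the sketch below
mirrors the PMCR_EL0 emulation added in patch 04: on a trapped guest
write, only the architecturally writable bits are updated; on a trapped
read, PMCR.P and PMCR.C read as zero. This is a simplified user-space
sketch, not the kernel code; vcpu_pmcr stands in for the vcpu's sys_regs
slot, and the mask values come from asm/pmu.h in this series.

```c
#include <assert.h>
#include <stdint.h>

/* Writable-bits mask and bit defines for PMCR_EL0, from asm/pmu.h */
#define ARMV8_PMCR_MASK	0x3f
#define ARMV8_PMCR_P	(1U << 1)	/* Reset all counters */
#define ARMV8_PMCR_C	(1U << 2)	/* Cycle counter reset */

static uint64_t vcpu_pmcr;	/* stand-in for vcpu_sys_reg(vcpu, PMCR_EL0) */

/* Guest write trapped by KVM: only update the writable bits */
static void emulate_pmcr_write(uint64_t guest_val)
{
	vcpu_pmcr = (vcpu_pmcr & ~(uint64_t)ARMV8_PMCR_MASK)
		  | (guest_val & ARMV8_PMCR_MASK);
}

/* Guest read trapped by KVM: PMCR.P and PMCR.C are RAZ */
static uint64_t emulate_pmcr_read(void)
{
	return vcpu_pmcr & ~(uint64_t)(ARMV8_PMCR_P | ARMV8_PMCR_C);
}
```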
Use perf to test this patchset in the guest. "perf list" shows the
hardware events and hardware cache events that perf supports; "perf stat
-e EVENT" then monitors a given event. For example, "perf stat -e
cycles" counts CPU cycles and "perf stat -e cache-misses" counts cache
misses.
Below are the outputs of "perf stat -r 5 sleep 5" when running in host
and guest.
Host:
Performance counter stats for 'sleep 5' (5 runs):
0.510276 task-clock (msec) # 0.000 CPUs utilized ( +- 1.57% )
1 context-switches # 0.002 M/sec
0 cpu-migrations # 0.000 K/sec
49 page-faults # 0.096 M/sec ( +- 0.77% )
1064117 cycles # 2.085 GHz ( +- 1.56% )
<not supported> stalled-cycles-frontend
<not supported> stalled-cycles-backend
529051 instructions # 0.50 insns per cycle ( +- 0.55% )
<not supported> branches
9894 branch-misses # 19.390 M/sec ( +- 1.70% )
5.000853900 seconds time elapsed ( +- 0.00% )
Guest:
Performance counter stats for 'sleep 5' (5 runs):
0.642456 task-clock (msec) # 0.000 CPUs utilized ( +- 1.81% )
1 context-switches # 0.002 M/sec
0 cpu-migrations # 0.000 K/sec
49 page-faults # 0.076 M/sec ( +- 1.64% )
1322717 cycles # 2.059 GHz ( +- 1.88% )
<not supported> stalled-cycles-frontend
<not supported> stalled-cycles-backend
640944 instructions # 0.48 insns per cycle ( +- 1.10% )
<not supported> branches
10665 branch-misses # 16.600 M/sec ( +- 2.23% )
5.001181452 seconds time elapsed ( +- 0.00% )
The following cycle counter read test was run in both guest and host:
static void test(void)
{
	unsigned long count, count1, count2;

	count = 0;
	count1 = read_cycles();
	count++;		/* dummy work between the two reads */
	count2 = read_cycles();
}
Host:
count1: 3046186213
count2: 3046186347
delta: 134
Guest:
count1: 5645797121
count2: 5645797270
delta: 149
The gap between guest and host is very small. One likely reason is
that, since we set exclude_hv = 1, cycles spent in EL2 and in the host
are not counted; the cycles used to save/restore registers at EL2 are
therefore excluded.
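The filter handling behind this can be sketched as follows: the guest's
PMEVTYPER filter bits (defined in asm/pmu.h in patch 01) are translated
into perf exclusion flags, and exclude_hv is always set so EL2 time is
never counted. Note that struct attr_sketch is a hypothetical stand-in
for the relevant fields of the kernel's struct perf_event_attr; the real
translation lives in virt/kvm/arm/pmu.c.

```c
#include <assert.h>
#include <stdint.h>

/* Event filter bits and EVENT field mask, from asm/pmu.h */
#define ARMV8_EXCLUDE_EL1	(1U << 31)
#define ARMV8_EXCLUDE_EL0	(1U << 30)
#define ARMV8_EVTYPE_EVENT	0x3ffU

/* Hypothetical stand-in for the relevant perf_event_attr fields */
struct attr_sketch {
	uint32_t raw_event;
	int exclude_kernel;
	int exclude_user;
	int exclude_hv;
};

static struct attr_sketch evtyper_to_attr(uint32_t evtyper)
{
	struct attr_sketch a = {0};

	a.raw_event = evtyper & ARMV8_EVTYPE_EVENT;
	a.exclude_kernel = !!(evtyper & ARMV8_EXCLUDE_EL1);
	a.exclude_user = !!(evtyper & ARMV8_EXCLUDE_EL0);
	a.exclude_hv = 1;	/* never count cycles spent at EL2 */
	return a;
}
```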
This patchset can be fetched from [1] and the relevant QEMU version for
test can be fetched from [2].
The results of 'perf test' can be found from [3][4].
The results of perf_event_tests test suite can be found from [5][6].
Also, I have tested "perf top" in two VMs and the host at the same
time. It works well.
Thanks,
Shannon
[1] https://git.linaro.org/people/shannon.zhao/linux-mainline.git KVM_ARM64_PMU_v6
[2] https://git.linaro.org/people/shannon.zhao/qemu.git virtual_PMU
[3] http://people.linaro.org/~shannon.zhao/PMU/perf-test-host.txt
[4] http://people.linaro.org/~shannon.zhao/PMU/perf-test-guest.txt
[5] http://people.linaro.org/~shannon.zhao/PMU/perf_event_tests-host.txt
[6] http://people.linaro.org/~shannon.zhao/PMU/perf_event_tests-guest.txt
Changes since v5:
* Rebase on new linux kernel mainline
* Remove state duplications and drop PMOVSCLR, PMCNTENCLR, PMINTENCLR,
PMXEVCNTR, PMXEVTYPER
* Add a helper to check if vPMU is already initialized
* remove kvm_vcpu from kvm_pmc
Changes since v4:
* Rebase on new linux kernel mainline
* Drop the reset handler of CP15 registers
* Fix a compile failure on arch ARM due to lack of asm/pmu.h
* Refactor the interrupt injecting flow according to Marc's suggestion
* Check the value of PMSELR register
* Calculate the attr.disabled according to PMCR.E and PMCNTENSET/CLR
* Fix some coding style
* Document the vPMU irq range
Changes since v3:
* Rebase on new linux kernel mainline
* Use ARMV8_MAX_COUNTERS instead of 32
* Reset PMCR.E to zero.
* Trigger overflow for software increment.
* Optimize PMU interrupt inject logic.
* Add handler for E,C,P bits of PMCR
* Fix the overflow bug found by perf_event_tests
* Run 'perf test', 'perf top' and perf_event_tests test suite
* Add exclude_hv = 1 configuration to not count in EL2
Changes since v2:
* Directly use perf raw event type to create perf_event in KVM
* Add a helper vcpu_sysreg_write
* remove unrelated header file
Changes since v1:
* Use switch...case for registers access handler instead of adding
alone handler for each register
* Try to use the sys_regs to store the register value instead of adding
new variables in struct kvm_pmc
* Fix the handle of cp15 regs
* Create a new kvm device vPMU, then userspace could choose whether to
create PMU
* Fix the handle of PMU overflow interrupt
Shannon Zhao (21):
ARM64: Move PMU register related defines to asm/pmu.h
KVM: ARM64: Define PMU data structure for each vcpu
KVM: ARM64: Add offset defines for PMU registers
KVM: ARM64: Add reset and access handlers for PMCR_EL0 register
KVM: ARM64: Add reset and access handlers for PMSELR register
KVM: ARM64: Add reset and access handlers for PMCEID0 and PMCEID1
register
KVM: ARM64: PMU: Add perf event map and introduce perf event creating
function
KVM: ARM64: Add access handler for PMEVTYPERn and PMCCFILTR register
KVM: ARM64: Add access handler for PMXEVTYPER register
KVM: ARM64: Add access handler for PMEVCNTRn and PMCCNTR register
KVM: ARM64: Add access handler for PMXEVCNTR register
KVM: ARM64: Add reset and access handlers for PMCNTENSET and
PMCNTENCLR register
KVM: ARM64: Add reset and access handlers for PMINTENSET and
PMINTENCLR register
KVM: ARM64: Add reset and access handlers for PMOVSSET and PMOVSCLR
register
KVM: ARM64: Add reset and access handlers for PMUSERENR register
KVM: ARM64: Add reset and access handlers for PMSWINC register
KVM: ARM64: Add helper to handle PMCR register bits
KVM: ARM64: Add PMU overflow interrupt routing
KVM: ARM64: Reset PMU state when resetting vcpu
KVM: ARM64: Free perf event of PMU when destroying vcpu
KVM: ARM64: Add a new kvm ARM PMU device
Documentation/virtual/kvm/devices/arm-pmu.txt | 16 +
arch/arm/kvm/arm.c | 3 +
arch/arm64/include/asm/kvm_asm.h | 45 ++-
arch/arm64/include/asm/kvm_host.h | 2 +
arch/arm64/include/asm/pmu.h | 66 +++
arch/arm64/include/uapi/asm/kvm.h | 3 +
arch/arm64/kernel/perf_event.c | 36 +-
arch/arm64/kvm/Kconfig | 8 +
arch/arm64/kvm/Makefile | 1 +
arch/arm64/kvm/reset.c | 3 +
arch/arm64/kvm/sys_regs.c | 552 ++++++++++++++++++++++++--
include/kvm/arm_pmu.h | 71 ++++
include/linux/kvm_host.h | 1 +
include/uapi/linux/kvm.h | 2 +
virt/kvm/arm/pmu.c | 533 +++++++++++++++++++++++++
virt/kvm/kvm_main.c | 4 +
16 files changed, 1278 insertions(+), 68 deletions(-)
create mode 100644 Documentation/virtual/kvm/devices/arm-pmu.txt
create mode 100644 arch/arm64/include/asm/pmu.h
create mode 100644 include/kvm/arm_pmu.h
create mode 100644 virt/kvm/arm/pmu.c
--
2.0.4
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v6 01/21] ARM64: Move PMU register related defines to asm/pmu.h
2015-12-08 12:47 [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Shannon Zhao
@ 2015-12-08 12:47 ` Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 02/21] KVM: ARM64: Define PMU data structure for each vcpu Shannon Zhao
` (20 subsequent siblings)
21 siblings, 0 replies; 48+ messages in thread
From: Shannon Zhao @ 2015-12-08 12:47 UTC (permalink / raw)
To: linux-arm-kernel
From: Shannon Zhao <shannon.zhao@linaro.org>
To use the ARMv8 PMU register defines from KVM code, move the relevant
definitions to the asm/pmu.h header file.
Signed-off-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/include/asm/pmu.h | 64 ++++++++++++++++++++++++++++++++++++++++++
arch/arm64/kernel/perf_event.c | 36 +-----------------------
2 files changed, 65 insertions(+), 35 deletions(-)
create mode 100644 arch/arm64/include/asm/pmu.h
diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
new file mode 100644
index 0000000..4264ea0
--- /dev/null
+++ b/arch/arm64/include/asm/pmu.h
@@ -0,0 +1,64 @@
+/*
+ * Copyright (C) 2015 Linaro Ltd, Shannon Zhao
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_PMU_H
+#define __ASM_PMU_H
+
+#define ARMV8_MAX_COUNTERS 32
+#define ARMV8_COUNTER_MASK (ARMV8_MAX_COUNTERS - 1)
+
+/*
+ * Per-CPU PMCR: config reg
+ */
+#define ARMV8_PMCR_E (1 << 0) /* Enable all counters */
+#define ARMV8_PMCR_P (1 << 1) /* Reset all counters */
+#define ARMV8_PMCR_C (1 << 2) /* Cycle counter reset */
+#define ARMV8_PMCR_D (1 << 3) /* CCNT counts every 64th cpu cycle */
+#define ARMV8_PMCR_X (1 << 4) /* Export to ETM */
+#define ARMV8_PMCR_DP (1 << 5) /* Disable CCNT if non-invasive debug*/
+#define ARMV8_PMCR_N_SHIFT 11 /* Number of counters supported */
+#define ARMV8_PMCR_N_MASK 0x1f
+#define ARMV8_PMCR_MASK 0x3f /* Mask for writable bits */
+
+/*
+ * PMCNTEN: counters enable reg
+ */
+#define ARMV8_CNTEN_MASK 0xffffffff /* Mask for writable bits */
+
+/*
+ * PMINTEN: counters interrupt enable reg
+ */
+#define ARMV8_INTEN_MASK 0xffffffff /* Mask for writable bits */
+
+/*
+ * PMOVSR: counters overflow flag status reg
+ */
+#define ARMV8_OVSR_MASK 0xffffffff /* Mask for writable bits */
+#define ARMV8_OVERFLOWED_MASK ARMV8_OVSR_MASK
+
+/*
+ * PMXEVTYPER: Event selection reg
+ */
+#define ARMV8_EVTYPE_MASK 0xc80003ff /* Mask for writable bits */
+#define ARMV8_EVTYPE_EVENT 0x3ff /* Mask for EVENT bits */
+
+/*
+ * Event filters for PMUv3
+ */
+#define ARMV8_EXCLUDE_EL1 (1 << 31)
+#define ARMV8_EXCLUDE_EL0 (1 << 30)
+#define ARMV8_INCLUDE_EL2 (1 << 27)
+
+#endif /* __ASM_PMU_H */
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 5b1897e..7eca5dc 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -24,6 +24,7 @@
#include <linux/of.h>
#include <linux/perf/arm_pmu.h>
#include <linux/platform_device.h>
+#include <asm/pmu.h>
/*
* ARMv8 PMUv3 Performance Events handling code.
@@ -187,9 +188,6 @@ static const unsigned armv8_a57_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
#define ARMV8_IDX_COUNTER_LAST(cpu_pmu) \
(ARMV8_IDX_CYCLE_COUNTER + cpu_pmu->num_events - 1)
-#define ARMV8_MAX_COUNTERS 32
-#define ARMV8_COUNTER_MASK (ARMV8_MAX_COUNTERS - 1)
-
/*
* ARMv8 low level PMU access
*/
@@ -200,38 +198,6 @@ static const unsigned armv8_a57_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
#define ARMV8_IDX_TO_COUNTER(x) \
(((x) - ARMV8_IDX_COUNTER0) & ARMV8_COUNTER_MASK)
-/*
- * Per-CPU PMCR: config reg
- */
-#define ARMV8_PMCR_E (1 << 0) /* Enable all counters */
-#define ARMV8_PMCR_P (1 << 1) /* Reset all counters */
-#define ARMV8_PMCR_C (1 << 2) /* Cycle counter reset */
-#define ARMV8_PMCR_D (1 << 3) /* CCNT counts every 64th cpu cycle */
-#define ARMV8_PMCR_X (1 << 4) /* Export to ETM */
-#define ARMV8_PMCR_DP (1 << 5) /* Disable CCNT if non-invasive debug*/
-#define ARMV8_PMCR_N_SHIFT 11 /* Number of counters supported */
-#define ARMV8_PMCR_N_MASK 0x1f
-#define ARMV8_PMCR_MASK 0x3f /* Mask for writable bits */
-
-/*
- * PMOVSR: counters overflow flag status reg
- */
-#define ARMV8_OVSR_MASK 0xffffffff /* Mask for writable bits */
-#define ARMV8_OVERFLOWED_MASK ARMV8_OVSR_MASK
-
-/*
- * PMXEVTYPER: Event selection reg
- */
-#define ARMV8_EVTYPE_MASK 0xc80003ff /* Mask for writable bits */
-#define ARMV8_EVTYPE_EVENT 0x3ff /* Mask for EVENT bits */
-
-/*
- * Event filters for PMUv3
- */
-#define ARMV8_EXCLUDE_EL1 (1 << 31)
-#define ARMV8_EXCLUDE_EL0 (1 << 30)
-#define ARMV8_INCLUDE_EL2 (1 << 27)
-
static inline u32 armv8pmu_pmcr_read(void)
{
u32 val;
--
2.0.4
* [PATCH v6 02/21] KVM: ARM64: Define PMU data structure for each vcpu
2015-12-08 12:47 [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 01/21] ARM64: Move PMU register related defines to asm/pmu.h Shannon Zhao
@ 2015-12-08 12:47 ` Shannon Zhao
2015-12-08 13:37 ` Marc Zyngier
2015-12-08 12:47 ` [PATCH v6 03/21] KVM: ARM64: Add offset defines for PMU registers Shannon Zhao
` (19 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Shannon Zhao @ 2015-12-08 12:47 UTC (permalink / raw)
To: linux-arm-kernel
From: Shannon Zhao <shannon.zhao@linaro.org>
Here we plan to support a virtual PMU for the guest through full
software emulation, so define some basic structs and functions in
preparation for further steps. Define struct kvm_pmc for a performance
monitor counter and struct kvm_pmu for the per-vcpu performance monitor
unit. According to the ARMv8 spec, the PMU contains at most 32
(ARMV8_MAX_COUNTERS) counters. Since this is only supported on ARM64
(with PMUv3), add a separate config symbol for it.
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/include/asm/kvm_host.h | 2 ++
arch/arm64/kvm/Kconfig | 8 ++++++++
include/kvm/arm_pmu.h | 40 +++++++++++++++++++++++++++++++++++++++
3 files changed, 50 insertions(+)
create mode 100644 include/kvm/arm_pmu.h
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index a35ce72..42e15bb 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -37,6 +37,7 @@
#include <kvm/arm_vgic.h>
#include <kvm/arm_arch_timer.h>
+#include <kvm/arm_pmu.h>
#define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
@@ -132,6 +133,7 @@ struct kvm_vcpu_arch {
/* VGIC state */
struct vgic_cpu vgic_cpu;
struct arch_timer_cpu timer_cpu;
+ struct kvm_pmu pmu;
/*
* Anything that is not used directly from assembly code goes
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index a5272c0..66da9a2 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -36,6 +36,7 @@ config KVM
select HAVE_KVM_EVENTFD
select HAVE_KVM_IRQFD
select KVM_ARM_VGIC_V3
+ select KVM_ARM_PMU
---help---
Support hosting virtualized guest machines.
We don't support KVM with 16K page tables yet, due to the multiple
@@ -48,6 +49,13 @@ config KVM_ARM_HOST
---help---
Provides host support for ARM processors.
+config KVM_ARM_PMU
+ bool
+ depends on KVM_ARM_HOST && HW_PERF_EVENTS
+ ---help---
+ Adds support for a virtual Performance Monitoring Unit (PMU) in
+ virtual machines.
+
source drivers/vhost/Kconfig
endif # VIRTUALIZATION
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
new file mode 100644
index 0000000..dea78f8
--- /dev/null
+++ b/include/kvm/arm_pmu.h
@@ -0,0 +1,40 @@
+/*
+ * Copyright (C) 2015 Linaro Ltd.
+ * Author: Shannon Zhao <shannon.zhao@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ASM_ARM_KVM_PMU_H
+#define __ASM_ARM_KVM_PMU_H
+
+#include <linux/perf_event.h>
+#ifdef CONFIG_KVM_ARM_PMU
+#include <asm/pmu.h>
+#endif
+
+struct kvm_pmc {
+ u8 idx;/* index into the pmu->pmc array */
+ struct perf_event *perf_event;
+ u64 bitmask;
+};
+
+struct kvm_pmu {
+#ifdef CONFIG_KVM_ARM_PMU
+ /* PMU IRQ Number per VCPU */
+ int irq_num;
+ struct kvm_pmc pmc[ARMV8_MAX_COUNTERS];
+#endif
+};
+
+#endif
--
2.0.4
* [PATCH v6 03/21] KVM: ARM64: Add offset defines for PMU registers
2015-12-08 12:47 [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 01/21] ARM64: Move PMU register related defines to asm/pmu.h Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 02/21] KVM: ARM64: Define PMU data structure for each vcpu Shannon Zhao
@ 2015-12-08 12:47 ` Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 04/21] KVM: ARM64: Add reset and access handlers for PMCR_EL0 register Shannon Zhao
` (18 subsequent siblings)
21 siblings, 0 replies; 48+ messages in thread
From: Shannon Zhao @ 2015-12-08 12:47 UTC (permalink / raw)
To: linux-arm-kernel
From: Shannon Zhao <shannon.zhao@linaro.org>
We are about to trap and emulate accesses to each PMU register
individually. This adds the context offsets for the AArch64 PMU
registers and their AArch32 counterparts.
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/include/asm/kvm_asm.h | 45 +++++++++++++++++++++++++++++++++++-----
1 file changed, 40 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 5e37710..fb3a2a0 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -48,12 +48,29 @@
#define MDSCR_EL1 22 /* Monitor Debug System Control Register */
#define MDCCINT_EL1 23 /* Monitor Debug Comms Channel Interrupt Enable Reg */
+/* Performance Monitors Registers */
+#define PMCR_EL0 24 /* Control Register */
+#define PMOVSSET_EL0 25 /* Overflow Flag Status Set Register */
+#define PMSELR_EL0 26 /* Event Counter Selection Register */
+#define PMCEID0_EL0 27 /* Common Event Identification Register 0 */
+#define PMCEID1_EL0 28 /* Common Event Identification Register 1 */
+#define PMEVCNTR0_EL0 29 /* Event Counter Register (0-30) */
+#define PMEVCNTR30_EL0 59
+#define PMCCNTR_EL0 60 /* Cycle Counter Register */
+#define PMEVTYPER0_EL0 61 /* Event Type Register (0-30) */
+#define PMEVTYPER30_EL0 91
+#define PMCCFILTR_EL0 92 /* Cycle Count Filter Register */
+#define PMCNTENSET_EL0 93 /* Count Enable Set Register */
+#define PMINTENSET_EL1 94 /* Interrupt Enable Set Register */
+#define PMUSERENR_EL0 95 /* User Enable Register */
+#define PMSWINC_EL0 96 /* Software Increment Register */
+
/* 32bit specific registers. Keep them at the end of the range */
-#define DACR32_EL2 24 /* Domain Access Control Register */
-#define IFSR32_EL2 25 /* Instruction Fault Status Register */
-#define FPEXC32_EL2 26 /* Floating-Point Exception Control Register */
-#define DBGVCR32_EL2 27 /* Debug Vector Catch Register */
-#define NR_SYS_REGS 28
+#define DACR32_EL2 97 /* Domain Access Control Register */
+#define IFSR32_EL2 98 /* Instruction Fault Status Register */
+#define FPEXC32_EL2 99 /* Floating-Point Exception Control Register */
+#define DBGVCR32_EL2 100 /* Debug Vector Catch Register */
+#define NR_SYS_REGS 101
/* 32bit mapping */
#define c0_MPIDR (MPIDR_EL1 * 2) /* MultiProcessor ID Register */
@@ -75,6 +92,19 @@
#define c6_IFAR (c6_DFAR + 1) /* Instruction Fault Address Register */
#define c7_PAR (PAR_EL1 * 2) /* Physical Address Register */
#define c7_PAR_high (c7_PAR + 1) /* PAR top 32 bits */
+
+/* Performance Monitors*/
+#define c9_PMCR (PMCR_EL0 * 2)
+#define c9_PMOVSSET (PMOVSSET_EL0 * 2)
+#define c9_PMCCNTR (PMCCNTR_EL0 * 2)
+#define c9_PMSELR (PMSELR_EL0 * 2)
+#define c9_PMCEID0 (PMCEID0_EL0 * 2)
+#define c9_PMCEID1 (PMCEID1_EL0 * 2)
+#define c9_PMCNTENSET (PMCNTENSET_EL0 * 2)
+#define c9_PMINTENSET (PMINTENSET_EL1 * 2)
+#define c9_PMUSERENR (PMUSERENR_EL0 * 2)
+#define c9_PMSWINC (PMSWINC_EL0 * 2)
+
#define c10_PRRR (MAIR_EL1 * 2) /* Primary Region Remap Register */
#define c10_NMRR (c10_PRRR + 1) /* Normal Memory Remap Register */
#define c12_VBAR (VBAR_EL1 * 2) /* Vector Base Address Register */
@@ -86,6 +116,11 @@
#define c10_AMAIR1 (c10_AMAIR0 + 1)/* Aux Memory Attr Indirection Reg */
#define c14_CNTKCTL (CNTKCTL_EL1 * 2) /* Timer Control Register (PL1) */
+/* Performance Monitors*/
+#define c14_PMEVCNTR0 (PMEVCNTR0_EL0 * 2)
+#define c14_PMEVTYPER0 (PMEVTYPER0_EL0 * 2)
+#define c14_PMCCFILTR (PMCCFILTR_EL0 * 2)
+
#define cp14_DBGDSCRext (MDSCR_EL1 * 2)
#define cp14_DBGBCR0 (DBGBCR0_EL1 * 2)
#define cp14_DBGBVR0 (DBGBVR0_EL1 * 2)
--
2.0.4
* [PATCH v6 04/21] KVM: ARM64: Add reset and access handlers for PMCR_EL0 register
2015-12-08 12:47 [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Shannon Zhao
` (2 preceding siblings ...)
2015-12-08 12:47 ` [PATCH v6 03/21] KVM: ARM64: Add offset defines for PMU registers Shannon Zhao
@ 2015-12-08 12:47 ` Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 05/21] KVM: ARM64: Add reset and access handlers for PMSELR register Shannon Zhao
` (17 subsequent siblings)
21 siblings, 0 replies; 48+ messages in thread
From: Shannon Zhao @ 2015-12-08 12:47 UTC (permalink / raw)
To: linux-arm-kernel
From: Shannon Zhao <shannon.zhao@linaro.org>
Add a reset handler which reads the host value of PMCR_EL0, sets the
writable bits to an architecturally UNKNOWN value, and resets PMCR.E to
zero. Add a common access handler for PMU registers which emulates reads
and writes, and add the emulation for PMCR.
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/kvm/sys_regs.c | 97 ++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 95 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index d2650e8..beb42f1 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -33,6 +33,7 @@
#include <asm/kvm_emulate.h>
#include <asm/kvm_host.h>
#include <asm/kvm_mmu.h>
+#include <asm/pmu.h>
#include <trace/events/kvm.h>
@@ -438,6 +439,58 @@ static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
vcpu_sys_reg(vcpu, MPIDR_EL1) = (1ULL << 31) | mpidr;
}
+static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+ u64 pmcr, val;
+
+ asm volatile("mrs %0, pmcr_el0\n" : "=r" (pmcr));
+ /* Writable bits of PMCR_EL0 (ARMV8_PMCR_MASK) is reset to UNKNOWN
+ * except PMCR.E resetting to zero.
+ */
+ val = ((pmcr & ~ARMV8_PMCR_MASK) | (ARMV8_PMCR_MASK & 0xdecafbad))
+ & (~ARMV8_PMCR_E);
+ vcpu_sys_reg(vcpu, r->reg) = val;
+}
+
+/* PMU registers accessor. */
+static bool access_pmu_regs(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ u64 val;
+
+ if (p->is_write) {
+ switch (r->reg) {
+ case PMCR_EL0: {
+ /* Only update writeable bits of PMCR */
+ val = vcpu_sys_reg(vcpu, r->reg);
+ val &= ~ARMV8_PMCR_MASK;
+ val |= p->regval & ARMV8_PMCR_MASK;
+ vcpu_sys_reg(vcpu, r->reg) = val;
+ break;
+ }
+ default:
+ vcpu_sys_reg(vcpu, r->reg) = p->regval;
+ break;
+ }
+ } else {
+ switch (r->reg) {
+ case PMCR_EL0: {
+ /* PMCR.P & PMCR.C are RAZ */
+ val = vcpu_sys_reg(vcpu, r->reg)
+ & ~(ARMV8_PMCR_P | ARMV8_PMCR_C);
+ p->regval = val;
+ break;
+ }
+ default:
+ p->regval = vcpu_sys_reg(vcpu, r->reg);
+ break;
+ }
+ }
+
+ return true;
+}
+
/* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
#define DBG_BCR_BVR_WCR_WVR_EL1(n) \
/* DBGBVRn_EL1 */ \
@@ -622,7 +675,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
/* PMCR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b000),
- trap_raz_wi },
+ access_pmu_regs, reset_pmcr, PMCR_EL0, },
/* PMCNTENSET_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b001),
trap_raz_wi },
@@ -856,6 +909,45 @@ static const struct sys_reg_desc cp14_64_regs[] = {
{ Op1( 0), CRm( 2), .access = trap_raz_wi },
};
+/* PMU CP15 registers accessor. */
+static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ u32 val;
+
+ if (p->is_write) {
+ switch (r->reg) {
+ case c9_PMCR: {
+ /* Only update writeable bits of PMCR */
+ val = vcpu_cp15(vcpu, r->reg);
+ val &= ~ARMV8_PMCR_MASK;
+ val |= p->regval & ARMV8_PMCR_MASK;
+ vcpu_cp15(vcpu, r->reg) = val;
+ break;
+ }
+ default:
+ vcpu_cp15(vcpu, r->reg) = p->regval;
+ break;
+ }
+ } else {
+ switch (r->reg) {
+ case c9_PMCR: {
+ /* PMCR.P & PMCR.C are RAZ */
+ val = vcpu_cp15(vcpu, r->reg)
+ & ~(ARMV8_PMCR_P | ARMV8_PMCR_C);
+ p->regval = val;
+ break;
+ }
+ default:
+ p->regval = vcpu_cp15(vcpu, r->reg);
+ break;
+ }
+ }
+
+ return true;
+}
+
/*
* Trapped cp15 registers. TTBR0/TTBR1 get a double encoding,
* depending on the way they are accessed (as a 32bit or a 64bit
@@ -884,7 +976,8 @@ static const struct sys_reg_desc cp15_regs[] = {
{ Op1( 0), CRn( 7), CRm(14), Op2( 2), access_dcsw },
/* PMU */
- { Op1( 0), CRn( 9), CRm(12), Op2( 0), trap_raz_wi },
+ { Op1( 0), CRn( 9), CRm(12), Op2( 0), access_pmu_cp15_regs,
+ NULL, c9_PMCR },
{ Op1( 0), CRn( 9), CRm(12), Op2( 1), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
--
2.0.4
* [PATCH v6 05/21] KVM: ARM64: Add reset and access handlers for PMSELR register
2015-12-08 12:47 [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Shannon Zhao
` (3 preceding siblings ...)
2015-12-08 12:47 ` [PATCH v6 04/21] KVM: ARM64: Add reset and access handlers for PMCR_EL0 register Shannon Zhao
@ 2015-12-08 12:47 ` Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 06/21] KVM: ARM64: Add reset and access handlers for PMCEID0 and PMCEID1 register Shannon Zhao
` (16 subsequent siblings)
21 siblings, 0 replies; 48+ messages in thread
From: Shannon Zhao @ 2015-12-08 12:47 UTC (permalink / raw)
To: linux-arm-kernel
From: Shannon Zhao <shannon.zhao@linaro.org>
Since the reset value of PMSELR_EL0 is UNKNOWN, use reset_unknown as
its reset handler. As accesses to PMSELR need no special handling, the
default case emulates reads and writes of the register.
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/kvm/sys_regs.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index beb42f1..d81f7ac 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -690,7 +690,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
trap_raz_wi },
/* PMSELR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b101),
- trap_raz_wi },
+ access_pmu_regs, reset_unknown, PMSELR_EL0 },
/* PMCEID0_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b110),
trap_raz_wi },
@@ -981,7 +981,8 @@ static const struct sys_reg_desc cp15_regs[] = {
{ Op1( 0), CRn( 9), CRm(12), Op2( 1), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
- { Op1( 0), CRn( 9), CRm(12), Op2( 5), trap_raz_wi },
+ { Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmu_cp15_regs,
+ NULL, c9_PMSELR },
{ Op1( 0), CRn( 9), CRm(12), Op2( 6), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(12), Op2( 7), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi },
--
2.0.4
* [PATCH v6 06/21] KVM: ARM64: Add reset and access handlers for PMCEID0 and PMCEID1 register
2015-12-08 12:47 [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Shannon Zhao
` (4 preceding siblings ...)
2015-12-08 12:47 ` [PATCH v6 05/21] KVM: ARM64: Add reset and access handlers for PMSELR register Shannon Zhao
@ 2015-12-08 12:47 ` Shannon Zhao
2015-12-08 14:23 ` Marc Zyngier
2015-12-08 12:47 ` [PATCH v6 07/21] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function Shannon Zhao
` (15 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Shannon Zhao @ 2015-12-08 12:47 UTC (permalink / raw)
To: linux-arm-kernel
From: Shannon Zhao <shannon.zhao@linaro.org>
Add a reset handler which reads the host value of PMCEID0 or PMCEID1.
Since writes to PMCEID0 and PMCEID1 are ignored, add a new case for
them.
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/kvm/sys_regs.c | 29 +++++++++++++++++++++++++----
1 file changed, 25 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index d81f7ac..1bcb2b7 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -452,6 +452,19 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
vcpu_sys_reg(vcpu, r->reg) = val;
}
+static void reset_pmceid(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+ u64 pmceid;
+
+ if (r->reg == PMCEID0_EL0)
+ asm volatile("mrs %0, pmceid0_el0\n" : "=r" (pmceid));
+ else
+ /* PMCEID1_EL0 */
+ asm volatile("mrs %0, pmceid1_el0\n" : "=r" (pmceid));
+
+ vcpu_sys_reg(vcpu, r->reg) = pmceid;
+}
+
/* PMU registers accessor. */
static bool access_pmu_regs(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
@@ -469,6 +482,9 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
vcpu_sys_reg(vcpu, r->reg) = val;
break;
}
+ case PMCEID0_EL0:
+ case PMCEID1_EL0:
+ return ignore_write(vcpu, p);
default:
vcpu_sys_reg(vcpu, r->reg) = p->regval;
break;
@@ -693,10 +709,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
access_pmu_regs, reset_unknown, PMSELR_EL0 },
/* PMCEID0_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b110),
- trap_raz_wi },
+ access_pmu_regs, reset_pmceid, PMCEID0_EL0 },
/* PMCEID1_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b111),
- trap_raz_wi },
+ access_pmu_regs, reset_pmceid, PMCEID1_EL0 },
/* PMCCNTR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b000),
trap_raz_wi },
@@ -926,6 +942,9 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
vcpu_cp15(vcpu, r->reg) = val;
break;
}
+ case c9_PMCEID0:
+ case c9_PMCEID1:
+ return ignore_write(vcpu, p);
default:
vcpu_cp15(vcpu, r->reg) = p->regval;
break;
@@ -983,8 +1002,10 @@ static const struct sys_reg_desc cp15_regs[] = {
{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmu_cp15_regs,
NULL, c9_PMSELR },
- { Op1( 0), CRn( 9), CRm(12), Op2( 6), trap_raz_wi },
- { Op1( 0), CRn( 9), CRm(12), Op2( 7), trap_raz_wi },
+ { Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmu_cp15_regs,
+ NULL, c9_PMCEID0 },
+ { Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmu_cp15_regs,
+ NULL, c9_PMCEID1 },
{ Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(13), Op2( 1), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(13), Op2( 2), trap_raz_wi },
--
2.0.4
* [PATCH v6 07/21] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function
2015-12-08 12:47 [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Shannon Zhao
` (5 preceding siblings ...)
2015-12-08 12:47 ` [PATCH v6 06/21] KVM: ARM64: Add reset and access handlers for PMCEID0 and PMCEID1 register Shannon Zhao
@ 2015-12-08 12:47 ` Shannon Zhao
2015-12-08 15:43 ` Marc Zyngier
2015-12-08 12:47 ` [PATCH v6 08/21] KVM: ARM64: Add access handler for PMEVTYPERn and PMCCFILTR register Shannon Zhao
` (14 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Shannon Zhao @ 2015-12-08 12:47 UTC (permalink / raw)
To: linux-arm-kernel
From: Shannon Zhao <shannon.zhao@linaro.org>
When tools like perf are used on the host, perf passes the event type
and the event id within that type's category to the kernel, which maps
them to a hardware event number and writes that number to the PMU
PMEVTYPER<n>_EL0 register. When KVM traps the event number written by
the guest, it uses that raw event type directly to create a perf_event
for it.
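The mapping above amounts to treating the guest's event number as the
raw config of a perf event. The sketch below illustrates this; the
struct and names are illustrative stand-ins (the real code fills a
struct perf_event_attr with type PERF_TYPE_RAW and creates the event
through the in-kernel perf_event API), and PERF_TYPE_RAW_SKETCH simply
mirrors the uapi value of PERF_TYPE_RAW.

```c
#include <assert.h>
#include <stdint.h>

#define ARMV8_EVTYPE_EVENT	0x3ffU	/* EVENT field mask, from asm/pmu.h */
#define PERF_TYPE_RAW_SKETCH	4	/* illustrative; matches PERF_TYPE_RAW */

/* Illustrative stand-in for the type/config fields of perf_event_attr */
struct raw_event_sketch {
	uint32_t type;
	uint64_t config;
};

/* The guest's PMEVTYPER event number is used directly as a raw event */
static struct raw_event_sketch make_raw_event(uint64_t evtyper)
{
	struct raw_event_sketch ev;

	ev.type = PERF_TYPE_RAW_SKETCH;
	ev.config = evtyper & ARMV8_EVTYPE_EVENT;
	return ev;
}
```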
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/include/asm/pmu.h | 2 +
arch/arm64/kvm/Makefile | 1 +
include/kvm/arm_pmu.h | 13 ++++
virt/kvm/arm/pmu.c | 138 +++++++++++++++++++++++++++++++++++++++++++
4 files changed, 154 insertions(+)
create mode 100644 virt/kvm/arm/pmu.c
diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
index 4264ea0..e3cb6b3 100644
--- a/arch/arm64/include/asm/pmu.h
+++ b/arch/arm64/include/asm/pmu.h
@@ -28,6 +28,8 @@
#define ARMV8_PMCR_D (1 << 3) /* CCNT counts every 64th cpu cycle */
#define ARMV8_PMCR_X (1 << 4) /* Export to ETM */
#define ARMV8_PMCR_DP (1 << 5) /* Disable CCNT if non-invasive debug*/
+/* Determines which PMCCNTR_EL0 bit generates an overflow */
+#define ARMV8_PMCR_LC (1 << 6)
#define ARMV8_PMCR_N_SHIFT 11 /* Number of counters supported */
#define ARMV8_PMCR_N_MASK 0x1f
#define ARMV8_PMCR_MASK 0x3f /* Mask for writable bits */
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 1949fe5..18d56d8 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -27,3 +27,4 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3-emul.o
kvm-$(CONFIG_KVM_ARM_HOST) += vgic-v3-switch.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arch_timer.o
+kvm-$(CONFIG_KVM_ARM_PMU) += $(KVM)/arm/pmu.o
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index dea78f8..36bde48 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -37,4 +37,17 @@ struct kvm_pmu {
#endif
};
+#ifdef CONFIG_KVM_ARM_PMU
+u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx);
+void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
+ u32 select_idx);
+#else
+static inline u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu,
+					    u32 select_idx)
+{
+	return 0;
+}
+static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
+						  u32 data, u32 select_idx) {}
+#endif
+
#endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
new file mode 100644
index 0000000..15babf1
--- /dev/null
+++ b/virt/kvm/arm/pmu.c
@@ -0,0 +1,138 @@
+/*
+ * Copyright (C) 2015 Linaro Ltd.
+ * Author: Shannon Zhao <shannon.zhao@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/cpu.h>
+#include <linux/kvm.h>
+#include <linux/kvm_host.h>
+#include <linux/perf_event.h>
+#include <asm/kvm_emulate.h>
+#include <kvm/arm_pmu.h>
+
+/**
+ * kvm_pmu_get_counter_value - get PMU counter value
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
+{
+ u64 counter, enabled, running;
+ struct kvm_pmu *pmu = &vcpu->arch.pmu;
+ struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+
+ if (!vcpu_mode_is_32bit(vcpu))
+ counter = vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + select_idx);
+ else
+ counter = vcpu_cp15(vcpu, c14_PMEVCNTR0 + select_idx);
+
+ if (pmc->perf_event)
+ counter += perf_event_read_value(pmc->perf_event, &enabled,
+ &running);
+
+ return counter & pmc->bitmask;
+}
+
+static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u32 select_idx)
+{
+ if (!vcpu_mode_is_32bit(vcpu))
+ return (vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_E) &
+ (vcpu_sys_reg(vcpu, PMCNTENSET_EL0) >> select_idx);
+ else
+ return (vcpu_sys_reg(vcpu, c9_PMCR) & ARMV8_PMCR_E) &
+ (vcpu_sys_reg(vcpu, c9_PMCNTENSET) >> select_idx);
+}
+
+static inline struct kvm_vcpu *kvm_pmc_to_vcpu(struct kvm_pmc *pmc)
+{
+ struct kvm_pmu *pmu;
+ struct kvm_vcpu_arch *vcpu_arch;
+
+ pmc -= pmc->idx;
+ pmu = container_of(pmc, struct kvm_pmu, pmc[0]);
+ vcpu_arch = container_of(pmu, struct kvm_vcpu_arch, pmu);
+ return container_of(vcpu_arch, struct kvm_vcpu, arch);
+}
+
+/**
+ * kvm_pmu_stop_counter - stop PMU counter
+ * @pmc: The PMU counter pointer
+ *
+ * If this counter has been configured to monitor some event, release it here.
+ */
+static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
+{
+ struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
+ u64 counter;
+
+ if (pmc->perf_event) {
+ counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);
+ if (!vcpu_mode_is_32bit(vcpu))
+ vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + pmc->idx) = counter;
+ else
+ vcpu_cp15(vcpu, c14_PMEVCNTR0 + pmc->idx) = counter;
+
+ perf_event_release_kernel(pmc->perf_event);
+ pmc->perf_event = NULL;
+ }
+}
+
+/**
+ * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
+ * @vcpu: The vcpu pointer
+ * @data: The data guest writes to PMXEVTYPER_EL0
+ * @select_idx: The number of selected counter
+ *
+ * When the guest accesses PMXEVTYPER_EL0, it wants to set a PMC to count an
+ * event with the given hardware event number. Here we call the perf_event API to
+ * emulate this action and create a kernel perf event for it.
+ */
+void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
+ u32 select_idx)
+{
+ struct kvm_pmu *pmu = &vcpu->arch.pmu;
+ struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+ struct perf_event *event;
+ struct perf_event_attr attr;
+ u32 eventsel;
+ u64 counter;
+
+ kvm_pmu_stop_counter(pmc);
+ eventsel = data & ARMV8_EVTYPE_EVENT;
+
+ memset(&attr, 0, sizeof(struct perf_event_attr));
+ attr.type = PERF_TYPE_RAW;
+ attr.size = sizeof(attr);
+ attr.pinned = 1;
+ attr.disabled = kvm_pmu_counter_is_enabled(vcpu, select_idx);
+ attr.exclude_user = data & ARMV8_EXCLUDE_EL0 ? 1 : 0;
+ attr.exclude_kernel = data & ARMV8_EXCLUDE_EL1 ? 1 : 0;
+ attr.exclude_hv = 1; /* Don't count EL2 events */
+ attr.exclude_host = 1; /* Don't count host events */
+ attr.config = eventsel;
+
+ counter = kvm_pmu_get_counter_value(vcpu, select_idx);
+ /* The initial sample period (overflow count) of an event. */
+ attr.sample_period = (-counter) & pmc->bitmask;
+
+ event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc);
+ if (IS_ERR(event)) {
+ printk_once("kvm: pmu event creation failed %ld\n",
+ PTR_ERR(event));
+ return;
+ }
+
+ pmc->perf_event = event;
+}
--
2.0.4
* [PATCH v6 08/21] KVM: ARM64: Add access handler for PMEVTYPERn and PMCCFILTR register
2015-12-08 12:47 [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Shannon Zhao
` (6 preceding siblings ...)
2015-12-08 12:47 ` [PATCH v6 07/21] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function Shannon Zhao
@ 2015-12-08 12:47 ` Shannon Zhao
2015-12-08 16:17 ` Marc Zyngier
2015-12-08 12:47 ` [PATCH v6 09/21] KVM: ARM64: Add access handler for PMXEVTYPER register Shannon Zhao
` (13 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Shannon Zhao @ 2015-12-08 12:47 UTC (permalink / raw)
To: linux-arm-kernel
From: Shannon Zhao <shannon.zhao@linaro.org>
Add an access handler which emulates writing and reading the PMEVTYPERn
and PMCCFILTR registers. On a write to PMEVTYPERn or PMCCFILTR, call
kvm_pmu_set_counter_event_type to create a perf_event for the selected
event type.
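The PMU_PMEVTYPER_EL0(n) macro below relies on the architected encoding of
PMEVTYPER<n>_EL0: CRn is 14, CRm carries bits [4:3] of n on top of 0b1100,
and Op2 carries bits [2:0]. A small sketch of that split (helper names are
invented for illustration):

```c
#include <assert.h>

/* CRm/Op2 split used by PMU_PMEVTYPER_EL0(n):
 * CRm = 0b1100 | (n >> 3), Op2 = n & 7, for n in 0..30. */
static unsigned int pmevtyper_crm(unsigned int n)
{
	return 0xc | ((n >> 3) & 0x3);
}

static unsigned int pmevtyper_op2(unsigned int n)
{
	return n & 0x7;
}
```

So PMEVTYPER0_EL0 lands at CRm=12/Op2=0 and PMEVTYPER30_EL0 at CRm=15/Op2=6,
which is why the table needs exactly the 31 macro expansions listed in the
patch.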
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/kvm/sys_regs.c | 98 +++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 98 insertions(+)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 1bcb2b7..2d8bd15 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -474,6 +474,12 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
if (p->is_write) {
switch (r->reg) {
+ case PMEVTYPER0_EL0 ... PMCCFILTR_EL0: {
+ val = r->reg - PMEVTYPER0_EL0;
+ kvm_pmu_set_counter_event_type(vcpu, p->regval, val);
+ vcpu_sys_reg(vcpu, r->reg) = p->regval;
+ break;
+ }
case PMCR_EL0: {
/* Only update writeable bits of PMCR */
val = vcpu_sys_reg(vcpu, r->reg);
@@ -522,6 +528,13 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
{ Op0(0b10), Op1(0b000), CRn(0b0000), CRm((n)), Op2(0b111), \
trap_wcr, reset_wcr, n, 0, get_wcr, set_wcr }
+/* Macro to expand the PMEVTYPERn_EL0 register */
+#define PMU_PMEVTYPER_EL0(n) \
+ /* PMEVTYPERn_EL0 */ \
+ { Op0(0b11), Op1(0b011), CRn(0b1110), \
+ CRm((0b1100 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)), \
+ access_pmu_regs, reset_unknown, (PMEVTYPER0_EL0 + n), }
+
/*
* Architected system registers.
* Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
@@ -736,6 +749,42 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b011),
NULL, reset_unknown, TPIDRRO_EL0 },
+ /* PMEVTYPERn_EL0 */
+ PMU_PMEVTYPER_EL0(0),
+ PMU_PMEVTYPER_EL0(1),
+ PMU_PMEVTYPER_EL0(2),
+ PMU_PMEVTYPER_EL0(3),
+ PMU_PMEVTYPER_EL0(4),
+ PMU_PMEVTYPER_EL0(5),
+ PMU_PMEVTYPER_EL0(6),
+ PMU_PMEVTYPER_EL0(7),
+ PMU_PMEVTYPER_EL0(8),
+ PMU_PMEVTYPER_EL0(9),
+ PMU_PMEVTYPER_EL0(10),
+ PMU_PMEVTYPER_EL0(11),
+ PMU_PMEVTYPER_EL0(12),
+ PMU_PMEVTYPER_EL0(13),
+ PMU_PMEVTYPER_EL0(14),
+ PMU_PMEVTYPER_EL0(15),
+ PMU_PMEVTYPER_EL0(16),
+ PMU_PMEVTYPER_EL0(17),
+ PMU_PMEVTYPER_EL0(18),
+ PMU_PMEVTYPER_EL0(19),
+ PMU_PMEVTYPER_EL0(20),
+ PMU_PMEVTYPER_EL0(21),
+ PMU_PMEVTYPER_EL0(22),
+ PMU_PMEVTYPER_EL0(23),
+ PMU_PMEVTYPER_EL0(24),
+ PMU_PMEVTYPER_EL0(25),
+ PMU_PMEVTYPER_EL0(26),
+ PMU_PMEVTYPER_EL0(27),
+ PMU_PMEVTYPER_EL0(28),
+ PMU_PMEVTYPER_EL0(29),
+ PMU_PMEVTYPER_EL0(30),
+ /* PMCCFILTR_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1110), CRm(0b1111), Op2(0b111),
+ access_pmu_regs, reset_unknown, PMCCFILTR_EL0, },
+
/* DACR32_EL2 */
{ Op0(0b11), Op1(0b100), CRn(0b0011), CRm(0b0000), Op2(0b000),
NULL, reset_unknown, DACR32_EL2 },
@@ -934,6 +983,12 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
if (p->is_write) {
switch (r->reg) {
+ case c14_PMEVTYPER0 ... c14_PMCCFILTR: {
+ val = r->reg - c14_PMEVTYPER0;
+ kvm_pmu_set_counter_event_type(vcpu, p->regval, val);
+ vcpu_cp15(vcpu, r->reg) = p->regval;
+ break;
+ }
case c9_PMCR: {
/* Only update writeable bits of PMCR */
val = vcpu_cp15(vcpu, r->reg);
@@ -967,6 +1022,13 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
return true;
}
+/* Macro to expand the PMEVTYPERn register */
+#define PMU_PMEVTYPER(n) \
+ /* PMEVTYPERn */ \
+ { Op1(0), CRn(0b1110), \
+ CRm((0b1100 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)), \
+ access_pmu_cp15_regs, NULL, (c14_PMEVTYPER0 + n), }
+
/*
* Trapped cp15 registers. TTBR0/TTBR1 get a double encoding,
* depending on the way they are accessed (as a 32bit or a 64bit
@@ -1022,6 +1084,42 @@ static const struct sys_reg_desc cp15_regs[] = {
{ Op1( 0), CRn(12), CRm(12), Op2( 5), trap_raz_wi },
{ Op1( 0), CRn(13), CRm( 0), Op2( 1), access_vm_reg, NULL, c13_CID },
+
+ /* PMEVTYPERn */
+ PMU_PMEVTYPER(0),
+ PMU_PMEVTYPER(1),
+ PMU_PMEVTYPER(2),
+ PMU_PMEVTYPER(3),
+ PMU_PMEVTYPER(4),
+ PMU_PMEVTYPER(5),
+ PMU_PMEVTYPER(6),
+ PMU_PMEVTYPER(7),
+ PMU_PMEVTYPER(8),
+ PMU_PMEVTYPER(9),
+ PMU_PMEVTYPER(10),
+ PMU_PMEVTYPER(11),
+ PMU_PMEVTYPER(12),
+ PMU_PMEVTYPER(13),
+ PMU_PMEVTYPER(14),
+ PMU_PMEVTYPER(15),
+ PMU_PMEVTYPER(16),
+ PMU_PMEVTYPER(17),
+ PMU_PMEVTYPER(18),
+ PMU_PMEVTYPER(19),
+ PMU_PMEVTYPER(20),
+ PMU_PMEVTYPER(21),
+ PMU_PMEVTYPER(22),
+ PMU_PMEVTYPER(23),
+ PMU_PMEVTYPER(24),
+ PMU_PMEVTYPER(25),
+ PMU_PMEVTYPER(26),
+ PMU_PMEVTYPER(27),
+ PMU_PMEVTYPER(28),
+ PMU_PMEVTYPER(29),
+ PMU_PMEVTYPER(30),
+ /* PMCCFILTR */
+ { Op1(0), CRn(14), CRm(15), Op2(7), access_pmu_cp15_regs,
+ NULL, c14_PMCCFILTR },
};
static const struct sys_reg_desc cp15_64_regs[] = {
--
2.0.4
* [PATCH v6 09/21] KVM: ARM64: Add access handler for PMXEVTYPER register
2015-12-08 12:47 [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Shannon Zhao
` (7 preceding siblings ...)
2015-12-08 12:47 ` [PATCH v6 08/21] KVM: ARM64: Add access handler for PMEVTYPERn and PMCCFILTR register Shannon Zhao
@ 2015-12-08 12:47 ` Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 10/21] KVM: ARM64: Add access handler for PMEVCNTRn and PMCCNTR register Shannon Zhao
` (12 subsequent siblings)
21 siblings, 0 replies; 48+ messages in thread
From: Shannon Zhao @ 2015-12-08 12:47 UTC (permalink / raw)
To: linux-arm-kernel
From: Shannon Zhao <shannon.zhao@linaro.org>
An access to the PMXEVTYPER register is redirected to the PMEVTYPERn or
PMCCFILTR register selected by PMSELR. If the value of PMSELR is valid,
call kvm_pmu_set_counter_event_type to create a perf_event.
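The validity check the patch adds (pmu_counter_idx_valid) can be modelled in
a few lines of userspace C; the constants below mirror the ARMV8_PMCR_N_*
defines from asm/pmu.h and are shown here only as stand-ins:

```c
#include <assert.h>
#include <stdint.h>

#define PMCR_N_SHIFT 11
#define PMCR_N_MASK  0x1f
#define COUNTER_MASK 0x1f /* all-ones PMSELR.SEL selects the cycle counter */

/* Model of pmu_counter_idx_valid(): an index is usable if it is below
 * the number of implemented event counters (PMCR_EL0.N) or if it is
 * the all-ones value that selects PMCCNTR. */
static int counter_idx_valid(uint64_t pmcr, uint64_t idx)
{
	uint64_t n = (pmcr >> PMCR_N_SHIFT) & PMCR_N_MASK;

	return idx < n || idx == COUNTER_MASK;
}
```

An out-of-range index makes the handler return without touching any state,
matching the `goto out` paths in access_pmu_pmxevtyper().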
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/kvm/sys_regs.c | 55 +++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 53 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 2d8bd15..c116a1b 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -465,6 +465,57 @@ static void reset_pmceid(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
vcpu_sys_reg(vcpu, r->reg) = pmceid;
}
+static bool pmu_counter_idx_valid(u64 pmcr, u64 idx)
+{
+ u64 val;
+
+ val = (pmcr >> ARMV8_PMCR_N_SHIFT) & ARMV8_PMCR_N_MASK;
+ if (idx >= val && idx != ARMV8_COUNTER_MASK)
+ return false;
+
+ return true;
+}
+
+static bool access_pmu_pmxevtyper(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ u64 pmcr, idx;
+
+ if (!vcpu_mode_is_32bit(vcpu)) {
+ pmcr = vcpu_sys_reg(vcpu, PMCR_EL0);
+ idx = vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_COUNTER_MASK;
+
+ if (!pmu_counter_idx_valid(pmcr, idx))
+ goto out;
+
+ if (!p->is_write) {
+ p->regval = vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + idx);
+ goto out;
+ }
+
+ vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + idx) = p->regval;
+ } else {
+ pmcr = vcpu_cp15(vcpu, c9_PMCR);
+ idx = vcpu_cp15(vcpu, c9_PMSELR) & ARMV8_COUNTER_MASK;
+
+ if (!pmu_counter_idx_valid(pmcr, idx))
+ goto out;
+
+ if (!p->is_write) {
+ p->regval = vcpu_cp15(vcpu, c14_PMEVTYPER0 + idx);
+ goto out;
+ }
+
+ vcpu_cp15(vcpu, c14_PMEVTYPER0 + idx) = p->regval;
+ }
+
+ kvm_pmu_set_counter_event_type(vcpu, p->regval, idx);
+
+out:
+ return true;
+}
+
/* PMU registers accessor. */
static bool access_pmu_regs(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
@@ -731,7 +782,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
trap_raz_wi },
/* PMXEVTYPER_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b001),
- trap_raz_wi },
+ access_pmu_pmxevtyper },
/* PMXEVCNTR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b010),
trap_raz_wi },
@@ -1069,7 +1120,7 @@ static const struct sys_reg_desc cp15_regs[] = {
{ Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmu_cp15_regs,
NULL, c9_PMCEID1 },
{ Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi },
- { Op1( 0), CRn( 9), CRm(13), Op2( 1), trap_raz_wi },
+ { Op1( 0), CRn( 9), CRm(13), Op2( 1), access_pmu_pmxevtyper },
{ Op1( 0), CRn( 9), CRm(13), Op2( 2), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(14), Op2( 1), trap_raz_wi },
--
2.0.4
* [PATCH v6 10/21] KVM: ARM64: Add access handler for PMEVCNTRn and PMCCNTR register
2015-12-08 12:47 [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Shannon Zhao
` (8 preceding siblings ...)
2015-12-08 12:47 ` [PATCH v6 09/21] KVM: ARM64: Add access handler for PMXEVTYPER register Shannon Zhao
@ 2015-12-08 12:47 ` Shannon Zhao
2015-12-08 16:30 ` Marc Zyngier
2015-12-08 12:47 ` [PATCH v6 11/21] KVM: ARM64: Add access handler for PMXEVCNTR register Shannon Zhao
` (11 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Shannon Zhao @ 2015-12-08 12:47 UTC (permalink / raw)
To: linux-arm-kernel
From: Shannon Zhao <shannon.zhao@linaro.org>
Since the reset value of PMEVCNTRn and PMCCNTR is UNKNOWN, use
reset_unknown as their reset handler. Add an access handler which
emulates writing and reading the PMEVCNTRn and PMCCNTR registers. On a
read of PMEVCNTRn or PMCCNTR, call perf_event_read_value to get the
count value of the perf event.
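The slightly subtle part of this patch is the write path: the guest-visible
counter is the stored vcpu register plus whatever the perf event has counted,
so a guest write adjusts the stored base by `(s64)written - current` rather
than overwriting it. A toy userspace model of that split state (struct and
names invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the split counter state: the vcpu register holds a base
 * value and the running perf event accumulates a delta on top of it. */
struct vcounter {
	uint64_t base;       /* vcpu_sys_reg(PMEVCNTRn_EL0) */
	uint64_t perf_count; /* what perf_event_read_value() reports */
	uint64_t bitmask;    /* 32- or 64-bit wrap mask */
};

static uint64_t vcounter_read(const struct vcounter *c)
{
	return (c->base + c->perf_count) & c->bitmask;
}

/* A guest write adjusts the base so that reads return `val`, without
 * disturbing the running perf event. */
static void vcounter_write(struct vcounter *c, uint64_t val)
{
	c->base += (int64_t)val - vcounter_read(c);
}
```

After a write, subsequent perf event progress is still reflected on top of
the value the guest wrote, which is exactly what the `+= (s64)p->regval - val`
lines in the hunk achieve.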
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/kvm/sys_regs.c | 107 +++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 105 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c116a1b..f7a73b5 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -525,6 +525,12 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
if (p->is_write) {
switch (r->reg) {
+ case PMEVCNTR0_EL0 ... PMCCNTR_EL0: {
+ val = kvm_pmu_get_counter_value(vcpu,
+ r->reg - PMEVCNTR0_EL0);
+ vcpu_sys_reg(vcpu, r->reg) += (s64)p->regval - val;
+ break;
+ }
case PMEVTYPER0_EL0 ... PMCCFILTR_EL0: {
val = r->reg - PMEVTYPER0_EL0;
kvm_pmu_set_counter_event_type(vcpu, p->regval, val);
@@ -548,6 +554,12 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
}
} else {
switch (r->reg) {
+ case PMEVCNTR0_EL0 ... PMCCNTR_EL0: {
+ val = kvm_pmu_get_counter_value(vcpu,
+ r->reg - PMEVCNTR0_EL0);
+ p->regval = val;
+ break;
+ }
case PMCR_EL0: {
/* PMCR.P & PMCR.C are RAZ */
val = vcpu_sys_reg(vcpu, r->reg)
@@ -579,6 +591,13 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
{ Op0(0b10), Op1(0b000), CRn(0b0000), CRm((n)), Op2(0b111), \
trap_wcr, reset_wcr, n, 0, get_wcr, set_wcr }
+/* Macro to expand the PMEVCNTRn_EL0 register */
+#define PMU_PMEVCNTR_EL0(n) \
+ /* PMEVCNTRn_EL0 */ \
+ { Op0(0b11), Op1(0b011), CRn(0b1110), \
+ CRm((0b1000 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)), \
+ access_pmu_regs, reset_unknown, (PMEVCNTR0_EL0 + n), }
+
/* Macro to expand the PMEVTYPERn_EL0 register */
#define PMU_PMEVTYPER_EL0(n) \
/* PMEVTYPERn_EL0 */ \
@@ -779,7 +798,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
access_pmu_regs, reset_pmceid, PMCEID1_EL0 },
/* PMCCNTR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b000),
- trap_raz_wi },
+ access_pmu_regs, reset_unknown, PMCCNTR_EL0 },
/* PMXEVTYPER_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b001),
access_pmu_pmxevtyper },
@@ -800,6 +819,38 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b011),
NULL, reset_unknown, TPIDRRO_EL0 },
+ /* PMEVCNTRn_EL0 */
+ PMU_PMEVCNTR_EL0(0),
+ PMU_PMEVCNTR_EL0(1),
+ PMU_PMEVCNTR_EL0(2),
+ PMU_PMEVCNTR_EL0(3),
+ PMU_PMEVCNTR_EL0(4),
+ PMU_PMEVCNTR_EL0(5),
+ PMU_PMEVCNTR_EL0(6),
+ PMU_PMEVCNTR_EL0(7),
+ PMU_PMEVCNTR_EL0(8),
+ PMU_PMEVCNTR_EL0(9),
+ PMU_PMEVCNTR_EL0(10),
+ PMU_PMEVCNTR_EL0(11),
+ PMU_PMEVCNTR_EL0(12),
+ PMU_PMEVCNTR_EL0(13),
+ PMU_PMEVCNTR_EL0(14),
+ PMU_PMEVCNTR_EL0(15),
+ PMU_PMEVCNTR_EL0(16),
+ PMU_PMEVCNTR_EL0(17),
+ PMU_PMEVCNTR_EL0(18),
+ PMU_PMEVCNTR_EL0(19),
+ PMU_PMEVCNTR_EL0(20),
+ PMU_PMEVCNTR_EL0(21),
+ PMU_PMEVCNTR_EL0(22),
+ PMU_PMEVCNTR_EL0(23),
+ PMU_PMEVCNTR_EL0(24),
+ PMU_PMEVCNTR_EL0(25),
+ PMU_PMEVCNTR_EL0(26),
+ PMU_PMEVCNTR_EL0(27),
+ PMU_PMEVCNTR_EL0(28),
+ PMU_PMEVCNTR_EL0(29),
+ PMU_PMEVCNTR_EL0(30),
/* PMEVTYPERn_EL0 */
PMU_PMEVTYPER_EL0(0),
PMU_PMEVTYPER_EL0(1),
@@ -1034,6 +1085,12 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
if (p->is_write) {
switch (r->reg) {
+ case c14_PMEVCNTR0 ... c9_PMCCNTR: {
+ val = kvm_pmu_get_counter_value(vcpu,
+ r->reg - c14_PMEVCNTR0);
+ vcpu_cp15(vcpu, r->reg) += (s64)p->regval - val;
+ break;
+ }
case c14_PMEVTYPER0 ... c14_PMCCFILTR: {
val = r->reg - c14_PMEVTYPER0;
kvm_pmu_set_counter_event_type(vcpu, p->regval, val);
@@ -1057,6 +1114,12 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
}
} else {
switch (r->reg) {
+ case c14_PMEVCNTR0 ... c9_PMCCNTR: {
+ val = kvm_pmu_get_counter_value(vcpu,
+ r->reg - c14_PMEVCNTR0);
+ p->regval = val;
+ break;
+ }
case c9_PMCR: {
/* PMCR.P & PMCR.C are RAZ */
val = vcpu_cp15(vcpu, r->reg)
@@ -1073,6 +1136,13 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
return true;
}
+/* Macro to expand the PMEVCNTRn register */
+#define PMU_PMEVCNTR(n) \
+ /* PMEVCNTRn */ \
+ { Op1(0), CRn(0b1110), \
+ CRm((0b1000 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)), \
+ access_pmu_cp15_regs, NULL, (c14_PMEVCNTR0 + n), }
+
/* Macro to expand the PMEVTYPERn register */
#define PMU_PMEVTYPER(n) \
/* PMEVTYPERn */ \
@@ -1119,7 +1189,8 @@ static const struct sys_reg_desc cp15_regs[] = {
NULL, c9_PMCEID0 },
{ Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmu_cp15_regs,
NULL, c9_PMCEID1 },
- { Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi },
+ { Op1( 0), CRn( 9), CRm(13), Op2( 0), access_pmu_cp15_regs,
+ NULL, c9_PMCCNTR },
{ Op1( 0), CRn( 9), CRm(13), Op2( 1), access_pmu_pmxevtyper },
{ Op1( 0), CRn( 9), CRm(13), Op2( 2), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi },
@@ -1136,6 +1207,38 @@ static const struct sys_reg_desc cp15_regs[] = {
{ Op1( 0), CRn(13), CRm( 0), Op2( 1), access_vm_reg, NULL, c13_CID },
+ /* PMEVCNTRn */
+ PMU_PMEVCNTR(0),
+ PMU_PMEVCNTR(1),
+ PMU_PMEVCNTR(2),
+ PMU_PMEVCNTR(3),
+ PMU_PMEVCNTR(4),
+ PMU_PMEVCNTR(5),
+ PMU_PMEVCNTR(6),
+ PMU_PMEVCNTR(7),
+ PMU_PMEVCNTR(8),
+ PMU_PMEVCNTR(9),
+ PMU_PMEVCNTR(10),
+ PMU_PMEVCNTR(11),
+ PMU_PMEVCNTR(12),
+ PMU_PMEVCNTR(13),
+ PMU_PMEVCNTR(14),
+ PMU_PMEVCNTR(15),
+ PMU_PMEVCNTR(16),
+ PMU_PMEVCNTR(17),
+ PMU_PMEVCNTR(18),
+ PMU_PMEVCNTR(19),
+ PMU_PMEVCNTR(20),
+ PMU_PMEVCNTR(21),
+ PMU_PMEVCNTR(22),
+ PMU_PMEVCNTR(23),
+ PMU_PMEVCNTR(24),
+ PMU_PMEVCNTR(25),
+ PMU_PMEVCNTR(26),
+ PMU_PMEVCNTR(27),
+ PMU_PMEVCNTR(28),
+ PMU_PMEVCNTR(29),
+ PMU_PMEVCNTR(30),
/* PMEVTYPERn */
PMU_PMEVTYPER(0),
PMU_PMEVTYPER(1),
--
2.0.4
* [PATCH v6 11/21] KVM: ARM64: Add access handler for PMXEVCNTR register
2015-12-08 12:47 [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Shannon Zhao
` (9 preceding siblings ...)
2015-12-08 12:47 ` [PATCH v6 10/21] KVM: ARM64: Add access handler for PMEVCNTRn and PMCCNTR register Shannon Zhao
@ 2015-12-08 12:47 ` Shannon Zhao
2015-12-08 16:33 ` Marc Zyngier
2015-12-08 12:47 ` [PATCH v6 12/21] KVM: ARM64: Add reset and access handlers for PMCNTENSET and PMCNTENCLR register Shannon Zhao
` (10 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Shannon Zhao @ 2015-12-08 12:47 UTC (permalink / raw)
To: linux-arm-kernel
From: Shannon Zhao <shannon.zhao@linaro.org>
An access to the PMXEVCNTR register is redirected to the PMEVCNTRn or
PMCCNTR register selected by PMSELR.
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/kvm/sys_regs.c | 44 ++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 42 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f7a73b5..2304937 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -516,6 +516,46 @@ out:
return true;
}
+static bool access_pmu_pmxevcntr(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ u64 pmcr, idx, val;
+
+ if (!vcpu_mode_is_32bit(vcpu)) {
+ pmcr = vcpu_sys_reg(vcpu, PMCR_EL0);
+ idx = vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_COUNTER_MASK;
+
+ if (!pmu_counter_idx_valid(pmcr, idx))
+ goto out;
+
+ val = kvm_pmu_get_counter_value(vcpu, idx);
+ if (!p->is_write) {
+ p->regval = val;
+ goto out;
+ }
+
+ vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + idx) += (s64)p->regval - val;
+ } else {
+ pmcr = vcpu_cp15(vcpu, c9_PMCR);
+ idx = vcpu_cp15(vcpu, c9_PMSELR) & ARMV8_COUNTER_MASK;
+
+ if (!pmu_counter_idx_valid(pmcr, idx))
+ goto out;
+
+ val = kvm_pmu_get_counter_value(vcpu, idx);
+ if (!p->is_write) {
+ p->regval = val;
+ goto out;
+ }
+
+ vcpu_cp15(vcpu, c14_PMEVCNTR0 + idx) += (s64)p->regval - val;
+ }
+
+out:
+ return true;
+}
+
/* PMU registers accessor. */
static bool access_pmu_regs(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
@@ -804,7 +844,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
access_pmu_pmxevtyper },
/* PMXEVCNTR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b010),
- trap_raz_wi },
+ access_pmu_pmxevcntr },
/* PMUSERENR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b000),
trap_raz_wi },
@@ -1192,7 +1232,7 @@ static const struct sys_reg_desc cp15_regs[] = {
{ Op1( 0), CRn( 9), CRm(13), Op2( 0), access_pmu_cp15_regs,
NULL, c9_PMCCNTR },
{ Op1( 0), CRn( 9), CRm(13), Op2( 1), access_pmu_pmxevtyper },
- { Op1( 0), CRn( 9), CRm(13), Op2( 2), trap_raz_wi },
+ { Op1( 0), CRn( 9), CRm(13), Op2( 2), access_pmu_pmxevcntr },
{ Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(14), Op2( 1), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(14), Op2( 2), trap_raz_wi },
--
2.0.4
* [PATCH v6 12/21] KVM: ARM64: Add reset and access handlers for PMCNTENSET and PMCNTENCLR register
2015-12-08 12:47 [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Shannon Zhao
` (10 preceding siblings ...)
2015-12-08 12:47 ` [PATCH v6 11/21] KVM: ARM64: Add access handler for PMXEVCNTR register Shannon Zhao
@ 2015-12-08 12:47 ` Shannon Zhao
2015-12-08 16:42 ` Marc Zyngier
2015-12-08 12:47 ` [PATCH v6 13/21] KVM: ARM64: Add reset and access handlers for PMINTENSET and PMINTENCLR register Shannon Zhao
` (9 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Shannon Zhao @ 2015-12-08 12:47 UTC (permalink / raw)
To: linux-arm-kernel
From: Shannon Zhao <shannon.zhao@linaro.org>
Since the reset value of PMCNTENSET and PMCNTENCLR is UNKNOWN, use
reset_unknown as their reset handler. Add a new case to emulate writes
to the PMCNTENSET and PMCNTENCLR registers.
On a write to PMCNTENSET, call perf_event_enable to enable the perf
event; on a write to PMCNTENCLR, call perf_event_disable to disable it.
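The enable path is gated on the global PMCR.E bit while the disable path is
not, and both walk only the set bits of the written mask. A minimal
userspace sketch of that control flow (arrays stand in for the per-counter
perf events; names are invented):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_COUNTERS 32

/* Model of kvm_pmu_enable_counter(): walk the set bits of the value
 * written to PMCNTENSET and enable the backing events, but only when
 * the global PMCR.E enable bit is set. */
static void enable_counters(uint32_t val, int pmcr_e, int *enabled)
{
	int i;

	if (!pmcr_e)
		return; /* global enable off: writes are remembered, not acted on */

	for (i = 0; i < MAX_COUNTERS; i++)
		if (val & (1U << i))
			enabled[i] = 1;
}

/* Model of kvm_pmu_disable_counter(): PMCNTENCLR ignores PMCR.E. */
static void disable_counters(uint32_t val, int *enabled)
{
	int i;

	for (i = 0; i < MAX_COUNTERS; i++)
		if (val & (1U << i))
			enabled[i] = 0;
}
```

In the real code the per-bit action is perf_event_enable()/perf_event_disable()
on pmu->pmc[i].perf_event, iterated with for_each_set_bit().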
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/kvm/sys_regs.c | 39 +++++++++++++++++++++++++++++++++++----
include/kvm/arm_pmu.h | 4 ++++
virt/kvm/arm/pmu.c | 47 +++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 86 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 2304937..a780cb5 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -577,6 +577,20 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
vcpu_sys_reg(vcpu, r->reg) = p->regval;
break;
}
+ case PMCNTENSET_EL0: {
+ val = p->regval;
+ if (r->Op2 == 1) {
+ /* accessing PMCNTENSET_EL0 */
+ kvm_pmu_enable_counter(vcpu, val,
+ vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_E);
+ vcpu_sys_reg(vcpu, r->reg) |= val;
+ } else {
+ /* accessing PMCNTENCLR_EL0 */
+ kvm_pmu_disable_counter(vcpu, val);
+ vcpu_sys_reg(vcpu, r->reg) &= ~val;
+ }
+ break;
+ }
case PMCR_EL0: {
/* Only update writeable bits of PMCR */
val = vcpu_sys_reg(vcpu, r->reg);
@@ -817,10 +832,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
access_pmu_regs, reset_pmcr, PMCR_EL0, },
/* PMCNTENSET_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b001),
- trap_raz_wi },
+ access_pmu_regs, reset_unknown, PMCNTENSET_EL0 },
/* PMCNTENCLR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b010),
- trap_raz_wi },
+ access_pmu_regs, reset_unknown, PMCNTENSET_EL0 },
/* PMOVSCLR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b011),
trap_raz_wi },
@@ -1137,6 +1152,20 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
vcpu_cp15(vcpu, r->reg) = p->regval;
break;
}
+ case c9_PMCNTENSET: {
+ val = p->regval;
+ if (r->Op2 == 1) {
+ /* accessing c9_PMCNTENSET */
+ kvm_pmu_enable_counter(vcpu, val,
+ vcpu_cp15(vcpu, c9_PMCR) & ARMV8_PMCR_E);
+ vcpu_cp15(vcpu, r->reg) |= val;
+ } else {
+ /* accessing c9_PMCNTENCLR */
+ kvm_pmu_disable_counter(vcpu, val);
+ vcpu_cp15(vcpu, r->reg) &= ~val;
+ }
+ break;
+ }
case c9_PMCR: {
/* Only update writeable bits of PMCR */
val = vcpu_cp15(vcpu, r->reg);
@@ -1220,8 +1249,10 @@ static const struct sys_reg_desc cp15_regs[] = {
/* PMU */
{ Op1( 0), CRn( 9), CRm(12), Op2( 0), access_pmu_cp15_regs,
NULL, c9_PMCR },
- { Op1( 0), CRn( 9), CRm(12), Op2( 1), trap_raz_wi },
- { Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi },
+ { Op1( 0), CRn( 9), CRm(12), Op2( 1), access_pmu_cp15_regs,
+ NULL, c9_PMCNTENSET },
+ { Op1( 0), CRn( 9), CRm(12), Op2( 2), access_pmu_cp15_regs,
+ NULL, c9_PMCNTENSET },
{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmu_cp15_regs,
NULL, c9_PMSELR },
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 36bde48..e731656 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -39,6 +39,8 @@ struct kvm_pmu {
#ifdef CONFIG_KVM_ARM_PMU
u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx);
+void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val);
+void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable);
void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
u32 select_idx);
#else
@@ -46,6 +48,8 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
{
return 0;
}
+static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val) {}
+static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable) {}
void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
u32 select_idx) {}
#endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 15babf1..45586d2 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -90,6 +90,53 @@ static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
}
/**
+ * kvm_pmu_enable_counter - enable selected PMU counter
+ * @vcpu: The vcpu pointer
+ * @val: the value guest writes to PMCNTENSET register
+ * @all_enable: the value of PMCR.E
+ *
+ * Call perf_event_enable to start counting the perf event
+ */
+void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable)
+{
+ int i;
+ struct kvm_pmu *pmu = &vcpu->arch.pmu;
+ struct kvm_pmc *pmc;
+
+ if (!all_enable)
+ return;
+
+ for_each_set_bit(i, (const unsigned long *)&val, ARMV8_MAX_COUNTERS) {
+ pmc = &pmu->pmc[i];
+ if (pmc->perf_event) {
+ perf_event_enable(pmc->perf_event);
+ if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
+ kvm_debug("fail to enable perf event\n");
+ }
+ }
+}
+
+/**
+ * kvm_pmu_disable_counter - disable selected PMU counter
+ * @vcpu: The vcpu pointer
+ * @val: the value guest writes to PMCNTENCLR register
+ *
+ * Call perf_event_disable to stop counting the perf event
+ */
+void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val)
+{
+ int i;
+ struct kvm_pmu *pmu = &vcpu->arch.pmu;
+ struct kvm_pmc *pmc;
+
+ for_each_set_bit(i, (const unsigned long *)&val, ARMV8_MAX_COUNTERS) {
+ pmc = &pmu->pmc[i];
+ if (pmc->perf_event)
+ perf_event_disable(pmc->perf_event);
+ }
+}
+
+/**
* kvm_pmu_set_counter_event_type - set selected counter to monitor some event
* @vcpu: The vcpu pointer
* @data: The data guest writes to PMXEVTYPER_EL0
--
2.0.4
* [PATCH v6 13/21] KVM: ARM64: Add reset and access handlers for PMINTENSET and PMINTENCLR register
2015-12-08 12:47 [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Shannon Zhao
` (11 preceding siblings ...)
2015-12-08 12:47 ` [PATCH v6 12/21] KVM: ARM64: Add reset and access handlers for PMCNTENSET and PMCNTENCLR register Shannon Zhao
@ 2015-12-08 12:47 ` Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 14/21] KVM: ARM64: Add reset and access handlers for PMOVSSET and PMOVSCLR register Shannon Zhao
` (8 subsequent siblings)
21 siblings, 0 replies; 48+ messages in thread
From: Shannon Zhao @ 2015-12-08 12:47 UTC (permalink / raw)
To: linux-arm-kernel
From: Shannon Zhao <shannon.zhao@linaro.org>
Since the reset value of PMINTENSET and PMINTENCLR is UNKNOWN, use
reset_unknown as their reset handler. Add a new case to emulate writes
to the PMINTENSET and PMINTENCLR registers.
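Like the PMCNTEN pair in the previous patch, both encodings here are backed
by a single vcpu register (PMINTENSET_EL1): the SET accessor ORs the written
bits in and the CLR accessor clears them. A two-line model of that set/clear
idiom (function name invented):

```c
#include <assert.h>
#include <stdint.h>

/* Model of the shared-backing SET/CLR register pair: writes to the SET
 * encoding set bits, writes to the CLR encoding clear them, and both
 * read back from the same storage. */
static void pminten_write(uint64_t *reg, uint64_t val, int is_set)
{
	if (is_set)
		*reg |= val;  /* PMINTENSET_EL1 */
	else
		*reg &= ~val; /* PMINTENCLR_EL1 */
}
```

This is why both table entries in the patch point at the same
PMINTENSET_EL1/c9_PMINTENSET register index.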
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/kvm/sys_regs.c | 28 ++++++++++++++++++++++++----
1 file changed, 24 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index a780cb5..c1dffb2 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -592,6 +592,15 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
}
break;
}
+ case PMINTENSET_EL1: {
+ if (r->Op2 == 1)
+ /* accessing PMINTENSET_EL1 */
+ vcpu_sys_reg(vcpu, r->reg) |= p->regval;
+ else
+ /* accessing PMINTENCLR_EL1 */
+ vcpu_sys_reg(vcpu, r->reg) &= ~p->regval;
+ break;
+ }
case PMCR_EL0: {
/* Only update writeable bits of PMCR */
val = vcpu_sys_reg(vcpu, r->reg);
@@ -789,10 +798,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
/* PMINTENSET_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b1001), CRm(0b1110), Op2(0b001),
- trap_raz_wi },
+ access_pmu_regs, reset_unknown, PMINTENSET_EL1 },
/* PMINTENCLR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b1001), CRm(0b1110), Op2(0b010),
- trap_raz_wi },
+ access_pmu_regs, reset_unknown, PMINTENSET_EL1 },
/* MAIR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b1010), CRm(0b0010), Op2(0b000),
@@ -1166,6 +1175,15 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
}
break;
}
+ case c9_PMINTENSET: {
+ if (r->Op2 == 1)
+ /* accessing c9_PMINTENSET */
+ vcpu_cp15(vcpu, r->reg) |= p->regval;
+ else
+ /* accessing c9_PMINTENCLR */
+ vcpu_cp15(vcpu, r->reg) &= ~p->regval;
+ break;
+ }
case c9_PMCR: {
/* Only update writeable bits of PMCR */
val = vcpu_cp15(vcpu, r->reg);
@@ -1265,8 +1283,10 @@ static const struct sys_reg_desc cp15_regs[] = {
{ Op1( 0), CRn( 9), CRm(13), Op2( 1), access_pmu_pmxevtyper },
{ Op1( 0), CRn( 9), CRm(13), Op2( 2), access_pmu_pmxevcntr },
{ Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi },
- { Op1( 0), CRn( 9), CRm(14), Op2( 1), trap_raz_wi },
- { Op1( 0), CRn( 9), CRm(14), Op2( 2), trap_raz_wi },
+ { Op1( 0), CRn( 9), CRm(14), Op2( 1), access_pmu_cp15_regs,
+ NULL, c9_PMINTENSET },
+ { Op1( 0), CRn( 9), CRm(14), Op2( 2), access_pmu_cp15_regs,
+ NULL, c9_PMINTENSET },
{ Op1( 0), CRn(10), CRm( 2), Op2( 0), access_vm_reg, NULL, c10_PRRR },
{ Op1( 0), CRn(10), CRm( 2), Op2( 1), access_vm_reg, NULL, c10_NMRR },
--
2.0.4
^ permalink raw reply related [flat|nested] 48+ messages in thread
* [PATCH v6 14/21] KVM: ARM64: Add reset and access handlers for PMOVSSET and PMOVSCLR register
2015-12-08 12:47 [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Shannon Zhao
` (12 preceding siblings ...)
2015-12-08 12:47 ` [PATCH v6 13/21] KVM: ARM64: Add reset and access handlers for PMINTENSET and PMINTENCLR register Shannon Zhao
@ 2015-12-08 12:47 ` Shannon Zhao
2015-12-08 16:59 ` Marc Zyngier
2015-12-08 12:47 ` [PATCH v6 15/21] KVM: ARM64: Add reset and access handlers for PMUSERENR register Shannon Zhao
` (7 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Shannon Zhao @ 2015-12-08 12:47 UTC (permalink / raw)
To: linux-arm-kernel
From: Shannon Zhao <shannon.zhao@linaro.org>
Since the reset value of PMOVSSET and PMOVSCLR is UNKNOWN, use
reset_unknown as their reset handler. Add a new case to emulate writing
the PMOVSSET or PMOVSCLR register.
When a non-zero value is written to PMOVSSET, pend the PMU interrupt.
When a write to PMOVSCLR clears all the currently set overflow bits,
clear the pending PMU interrupt.
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/kvm/sys_regs.c | 27 ++++++++++++++++--
include/kvm/arm_pmu.h | 4 +++
virt/kvm/arm/pmu.c | 72 +++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 100 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c1dffb2..c830fde 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -601,6 +601,15 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
vcpu_sys_reg(vcpu, r->reg) &= ~p->regval;
break;
}
+ case PMOVSSET_EL0: {
+ if (r->CRm == 14)
+ /* accessing PMOVSSET_EL0 */
+ kvm_pmu_overflow_set(vcpu, p->regval);
+ else
+ /* accessing PMOVSCLR_EL0 */
+ kvm_pmu_overflow_clear(vcpu, p->regval);
+ break;
+ }
case PMCR_EL0: {
/* Only update writeable bits of PMCR */
val = vcpu_sys_reg(vcpu, r->reg);
@@ -847,7 +856,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
access_pmu_regs, reset_unknown, PMCNTENSET_EL0 },
/* PMOVSCLR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b011),
- trap_raz_wi },
+ access_pmu_regs, reset_unknown, PMOVSSET_EL0 },
/* PMSWINC_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b100),
trap_raz_wi },
@@ -874,7 +883,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
trap_raz_wi },
/* PMOVSSET_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b011),
- trap_raz_wi },
+ access_pmu_regs, reset_unknown, PMOVSSET_EL0 },
/* TPIDR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b010),
@@ -1184,6 +1193,15 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
vcpu_cp15(vcpu, r->reg) &= ~p->regval;
break;
}
+ case c9_PMOVSSET: {
+ if (r->CRm == 14)
+ /* accessing c9_PMOVSSET */
+ kvm_pmu_overflow_set(vcpu, p->regval);
+ else
+ /* accessing c9_PMOVSCLR */
+ kvm_pmu_overflow_clear(vcpu, p->regval);
+ break;
+ }
case c9_PMCR: {
/* Only update writeable bits of PMCR */
val = vcpu_cp15(vcpu, r->reg);
@@ -1271,7 +1289,8 @@ static const struct sys_reg_desc cp15_regs[] = {
NULL, c9_PMCNTENSET },
{ Op1( 0), CRn( 9), CRm(12), Op2( 2), access_pmu_cp15_regs,
NULL, c9_PMCNTENSET },
- { Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
+ { Op1( 0), CRn( 9), CRm(12), Op2( 3), access_pmu_cp15_regs,
+ NULL, c9_PMOVSSET },
{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmu_cp15_regs,
NULL, c9_PMSELR },
{ Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmu_cp15_regs,
@@ -1287,6 +1306,8 @@ static const struct sys_reg_desc cp15_regs[] = {
NULL, c9_PMINTENSET },
{ Op1( 0), CRn( 9), CRm(14), Op2( 2), access_pmu_cp15_regs,
NULL, c9_PMINTENSET },
+ { Op1( 0), CRn( 9), CRm(14), Op2( 3), access_pmu_cp15_regs,
+ NULL, c9_PMOVSSET },
{ Op1( 0), CRn(10), CRm( 2), Op2( 0), access_vm_reg, NULL, c10_PRRR },
{ Op1( 0), CRn(10), CRm( 2), Op2( 1), access_vm_reg, NULL, c10_NMRR },
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index e731656..a76df52 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -41,6 +41,8 @@ struct kvm_pmu {
u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx);
void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val);
void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable);
+void kvm_pmu_overflow_clear(struct kvm_vcpu *vcpu, u32 val);
+void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u32 val);
void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
u32 select_idx);
#else
@@ -50,6 +52,8 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
}
void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val) {}
void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable) {}
+void kvm_pmu_overflow_clear(struct kvm_vcpu *vcpu, u32 val) {}
+void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u32 val) {}
void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
u32 select_idx) {}
#endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 45586d2..ba7d11c 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -136,6 +136,78 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val)
}
}
+static u32 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
+{
+ u32 val;
+
+ if (!vcpu_mode_is_32bit(vcpu))
+ val = (vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMCR_N_SHIFT)
+ & ARMV8_PMCR_N_MASK;
+ else
+ val = (vcpu_cp15(vcpu, c9_PMCR) >> ARMV8_PMCR_N_SHIFT)
+ & ARMV8_PMCR_N_MASK;
+
+ return GENMASK(val - 1, 0) | BIT(ARMV8_COUNTER_MASK);
+}
+
+/**
+ * kvm_pmu_overflow_clear - clear PMU overflow interrupt
+ * @vcpu: The vcpu pointer
+ * @val: the value guest writes to PMOVSCLR register
+ */
+void kvm_pmu_overflow_clear(struct kvm_vcpu *vcpu, u32 val)
+{
+ u32 mask = kvm_pmu_valid_counter_mask(vcpu);
+
+ if (!vcpu_mode_is_32bit(vcpu)) {
+ vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= mask;
+ vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= ~val;
+ val = vcpu_sys_reg(vcpu, PMOVSSET_EL0);
+ } else {
+ vcpu_cp15(vcpu, c9_PMOVSSET) &= mask;
+ vcpu_cp15(vcpu, c9_PMOVSSET) &= ~val;
+ val = vcpu_cp15(vcpu, c9_PMOVSSET);
+ }
+
+ /*
+ * If all the overflow bits are cleared, kick the vcpu to clear the
+ * interrupt-pending status.
+ */
+ if (val == 0)
+ kvm_vcpu_kick(vcpu);
+}
+
+/**
+ * kvm_pmu_overflow_set - set PMU overflow interrupt
+ * @vcpu: The vcpu pointer
+ * @val: the value guest writes to PMOVSSET register
+ */
+void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u32 val)
+{
+ u32 mask = kvm_pmu_valid_counter_mask(vcpu);
+
+ val &= mask;
+ if (val == 0)
+ return;
+
+ if (!vcpu_mode_is_32bit(vcpu)) {
+ vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= mask;
+ vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= val;
+ val = vcpu_sys_reg(vcpu, PMCNTENSET_EL0)
+ & vcpu_sys_reg(vcpu, PMINTENSET_EL1)
+ & vcpu_sys_reg(vcpu, PMOVSSET_EL0);
+ } else {
+ vcpu_cp15(vcpu, c9_PMOVSSET) &= mask;
+ vcpu_cp15(vcpu, c9_PMOVSSET) |= val;
+ val = vcpu_cp15(vcpu, c9_PMCNTENSET)
+ & vcpu_cp15(vcpu, c9_PMINTENSET)
+ & vcpu_cp15(vcpu, c9_PMOVSSET);
+ }
+
+ if (val != 0)
+ kvm_vcpu_kick(vcpu);
+}
+
/**
* kvm_pmu_set_counter_event_type - set selected counter to monitor some event
* @vcpu: The vcpu pointer
--
2.0.4
^ permalink raw reply related [flat|nested] 48+ messages in thread
* [PATCH v6 15/21] KVM: ARM64: Add reset and access handlers for PMUSERENR register
2015-12-08 12:47 [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Shannon Zhao
` (13 preceding siblings ...)
2015-12-08 12:47 ` [PATCH v6 14/21] KVM: ARM64: Add reset and access handlers for PMOVSSET and PMOVSCLR register Shannon Zhao
@ 2015-12-08 12:47 ` Shannon Zhao
2015-12-08 17:03 ` Marc Zyngier
2015-12-08 12:47 ` [PATCH v6 16/21] KVM: ARM64: Add reset and access handlers for PMSWINC register Shannon Zhao
` (6 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Shannon Zhao @ 2015-12-08 12:47 UTC (permalink / raw)
To: linux-arm-kernel
From: Shannon Zhao <shannon.zhao@linaro.org>
Since the reset value of PMUSERENR_EL0 is UNKNOWN, use reset_unknown as
its reset handler.
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/kvm/sys_regs.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c830fde..80b66c0 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -880,7 +880,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
access_pmu_pmxevcntr },
/* PMUSERENR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b000),
- trap_raz_wi },
+ access_pmu_regs, reset_unknown, PMUSERENR_EL0 },
/* PMOVSSET_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b011),
access_pmu_regs, reset_unknown, PMOVSSET_EL0 },
@@ -1301,7 +1301,8 @@ static const struct sys_reg_desc cp15_regs[] = {
NULL, c9_PMCCNTR },
{ Op1( 0), CRn( 9), CRm(13), Op2( 1), access_pmu_pmxevtyper },
{ Op1( 0), CRn( 9), CRm(13), Op2( 2), access_pmu_pmxevcntr },
- { Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi },
+ { Op1( 0), CRn( 9), CRm(14), Op2( 0), access_pmu_cp15_regs,
+ NULL, c9_PMUSERENR, 0 },
{ Op1( 0), CRn( 9), CRm(14), Op2( 1), access_pmu_cp15_regs,
NULL, c9_PMINTENSET },
{ Op1( 0), CRn( 9), CRm(14), Op2( 2), access_pmu_cp15_regs,
--
2.0.4
^ permalink raw reply related [flat|nested] 48+ messages in thread
* [PATCH v6 16/21] KVM: ARM64: Add reset and access handlers for PMSWINC register
2015-12-08 12:47 [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Shannon Zhao
` (14 preceding siblings ...)
2015-12-08 12:47 ` [PATCH v6 15/21] KVM: ARM64: Add reset and access handlers for PMUSERENR register Shannon Zhao
@ 2015-12-08 12:47 ` Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 17/21] KVM: ARM64: Add helper to handle PMCR register bits Shannon Zhao
` (5 subsequent siblings)
21 siblings, 0 replies; 48+ messages in thread
From: Shannon Zhao @ 2015-12-08 12:47 UTC (permalink / raw)
To: linux-arm-kernel
From: Shannon Zhao <shannon.zhao@linaro.org>
Add an access handler which emulates writing and reading the PMSWINC
register, and add support for the software increment event.
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/kvm/sys_regs.c | 16 +++++++++++++++-
include/kvm/arm_pmu.h | 2 ++
virt/kvm/arm/pmu.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 61 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 80b66c0..9baa654 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -610,6 +610,10 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
kvm_pmu_overflow_clear(vcpu, p->regval);
break;
}
+ case PMSWINC_EL0: {
+ kvm_pmu_software_increment(vcpu, p->regval);
+ break;
+ }
case PMCR_EL0: {
/* Only update writeable bits of PMCR */
val = vcpu_sys_reg(vcpu, r->reg);
@@ -633,6 +637,8 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
p->regval = val;
break;
}
+ case PMSWINC_EL0:
+ return read_zero(vcpu, p);
case PMCR_EL0: {
/* PMCR.P & PMCR.C are RAZ */
val = vcpu_sys_reg(vcpu, r->reg)
@@ -859,7 +865,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
access_pmu_regs, reset_unknown, PMOVSSET_EL0 },
/* PMSWINC_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b100),
- trap_raz_wi },
+ access_pmu_regs, reset_unknown, PMSWINC_EL0 },
/* PMSELR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b101),
access_pmu_regs, reset_unknown, PMSELR_EL0 },
@@ -1202,6 +1208,10 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
kvm_pmu_overflow_clear(vcpu, p->regval);
break;
}
+ case c9_PMSWINC: {
+ kvm_pmu_software_increment(vcpu, p->regval);
+ break;
+ }
case c9_PMCR: {
/* Only update writeable bits of PMCR */
val = vcpu_cp15(vcpu, r->reg);
@@ -1225,6 +1235,8 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
p->regval = val;
break;
}
+ case c9_PMSWINC:
+ return read_zero(vcpu, p);
case c9_PMCR: {
/* PMCR.P & PMCR.C are RAZ */
val = vcpu_cp15(vcpu, r->reg)
@@ -1291,6 +1303,8 @@ static const struct sys_reg_desc cp15_regs[] = {
NULL, c9_PMCNTENSET },
{ Op1( 0), CRn( 9), CRm(12), Op2( 3), access_pmu_cp15_regs,
NULL, c9_PMOVSSET },
+ { Op1( 0), CRn( 9), CRm(12), Op2( 4), access_pmu_cp15_regs,
+ NULL, c9_PMSWINC },
{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmu_cp15_regs,
NULL, c9_PMSELR },
{ Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmu_cp15_regs,
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index a76df52..d12450a 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -43,6 +43,7 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val);
void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable);
void kvm_pmu_overflow_clear(struct kvm_vcpu *vcpu, u32 val);
void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u32 val);
+void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u32 val);
void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
u32 select_idx);
#else
@@ -54,6 +55,7 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val) {}
void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable) {}
void kvm_pmu_overflow_clear(struct kvm_vcpu *vcpu, u32 val) {}
void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u32 val) {}
+void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u32 val) {}
void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
u32 select_idx) {}
#endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index ba7d11c..093e211 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -209,6 +209,46 @@ void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u32 val)
}
/**
+ * kvm_pmu_software_increment - do software increment
+ * @vcpu: The vcpu pointer
+ * @val: the value guest writes to PMSWINC register
+ */
+void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u32 val)
+{
+ int i;
+ u32 type, enable, reg;
+
+ if (val == 0)
+ return;
+
+ for (i = 0; i < ARMV8_MAX_COUNTERS; i++) {
+ if (!((val >> i) & 0x1))
+ continue;
+ if (!vcpu_mode_is_32bit(vcpu)) {
+ type = vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i)
+ & ARMV8_EVTYPE_EVENT;
+ enable = vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+ if ((type == 0) && ((enable >> i) & 0x1)) {
+ vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i)++;
+ reg = vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i);
+ if ((reg & 0xFFFFFFFF) == 0)
+ kvm_pmu_overflow_set(vcpu, BIT(i));
+ }
+ } else {
+ type = vcpu_cp15(vcpu, c14_PMEVTYPER0 + i)
+ & ARMV8_EVTYPE_EVENT;
+ enable = vcpu_cp15(vcpu, c9_PMCNTENSET);
+ if ((type == 0) && ((enable >> i) & 0x1)) {
+ vcpu_cp15(vcpu, c14_PMEVCNTR0 + i)++;
+ reg = vcpu_cp15(vcpu, c14_PMEVCNTR0 + i);
+ if ((reg & 0xFFFFFFFF) == 0)
+ kvm_pmu_overflow_set(vcpu, BIT(i));
+ }
+ }
+ }
+}
+
+/**
* kvm_pmu_set_counter_event_type - set selected counter to monitor some event
* @vcpu: The vcpu pointer
* @data: The data guest writes to PMXEVTYPER_EL0
@@ -231,6 +271,10 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
kvm_pmu_stop_counter(pmc);
eventsel = data & ARMV8_EVTYPE_EVENT;
+ /* The software increment event doesn't need to create a perf event */
+ if (eventsel == 0)
+ return;
+
memset(&attr, 0, sizeof(struct perf_event_attr));
attr.type = PERF_TYPE_RAW;
attr.size = sizeof(attr);
--
2.0.4
^ permalink raw reply related [flat|nested] 48+ messages in thread
* [PATCH v6 17/21] KVM: ARM64: Add helper to handle PMCR register bits
2015-12-08 12:47 [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Shannon Zhao
` (15 preceding siblings ...)
2015-12-08 12:47 ` [PATCH v6 16/21] KVM: ARM64: Add reset and access handlers for PMSWINC register Shannon Zhao
@ 2015-12-08 12:47 ` Shannon Zhao
2015-12-08 17:36 ` Marc Zyngier
2015-12-08 12:47 ` [PATCH v6 18/21] KVM: ARM64: Add PMU overflow interrupt routing Shannon Zhao
` (4 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Shannon Zhao @ 2015-12-08 12:47 UTC (permalink / raw)
To: linux-arm-kernel
From: Shannon Zhao <shannon.zhao@linaro.org>
According to the ARMv8 spec, writing 1 to PMCR.E enables all counters
selected by PMCNTENSET, while writing 0 to PMCR.E disables all counters.
Writing 1 to PMCR.P resets all event counters, not including PMCCNTR, to
zero. Writing 1 to PMCR.C resets PMCCNTR to zero.
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/kvm/sys_regs.c | 2 ++
include/kvm/arm_pmu.h | 2 ++
virt/kvm/arm/pmu.c | 51 +++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 55 insertions(+)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 9baa654..110b288 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -620,6 +620,7 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
val &= ~ARMV8_PMCR_MASK;
val |= p->regval & ARMV8_PMCR_MASK;
vcpu_sys_reg(vcpu, r->reg) = val;
+ kvm_pmu_handle_pmcr(vcpu, val);
break;
}
case PMCEID0_EL0:
@@ -1218,6 +1219,7 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
val &= ~ARMV8_PMCR_MASK;
val |= p->regval & ARMV8_PMCR_MASK;
vcpu_cp15(vcpu, r->reg) = val;
+ kvm_pmu_handle_pmcr(vcpu, val);
break;
}
case c9_PMCEID0:
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index d12450a..a131f76 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -46,6 +46,7 @@ void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u32 val);
void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u32 val);
void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
u32 select_idx);
+void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u32 val);
#else
u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
{
@@ -58,6 +59,7 @@ void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u32 val) {}
void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u32 val) {}
void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
u32 select_idx) {}
+void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u32 val) {}
#endif
#endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 093e211..9b9c706 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -151,6 +151,57 @@ static u32 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
}
/**
+ * kvm_pmu_handle_pmcr - handle PMCR register
+ * @vcpu: The vcpu pointer
+ * @val: the value guest writes to PMCR register
+ */
+void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u32 val)
+{
+ struct kvm_pmu *pmu = &vcpu->arch.pmu;
+ struct kvm_pmc *pmc;
+ u32 enable;
+ int i;
+
+ if (val & ARMV8_PMCR_E) {
+ if (!vcpu_mode_is_32bit(vcpu))
+ enable = vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+ else
+ enable = vcpu_cp15(vcpu, c9_PMCNTENSET);
+
+ kvm_pmu_enable_counter(vcpu, enable, true);
+ } else {
+ kvm_pmu_disable_counter(vcpu, 0xffffffffUL);
+ }
+
+ if (val & ARMV8_PMCR_C) {
+ pmc = &pmu->pmc[ARMV8_MAX_COUNTERS - 1];
+ if (pmc->perf_event)
+ local64_set(&pmc->perf_event->count, 0);
+ if (!vcpu_mode_is_32bit(vcpu))
+ vcpu_sys_reg(vcpu, PMCCNTR_EL0) = 0;
+ else
+ vcpu_cp15(vcpu, c9_PMCCNTR) = 0;
+ }
+
+ if (val & ARMV8_PMCR_P) {
+ for (i = 0; i < ARMV8_MAX_COUNTERS - 1; i++) {
+ pmc = &pmu->pmc[i];
+ if (pmc->perf_event)
+ local64_set(&pmc->perf_event->count, 0);
+ if (!vcpu_mode_is_32bit(vcpu))
+ vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = 0;
+ else
+ vcpu_cp15(vcpu, c14_PMEVCNTR0 + i) = 0;
+ }
+ }
+
+ if (val & ARMV8_PMCR_LC) {
+ pmc = &pmu->pmc[ARMV8_MAX_COUNTERS - 1];
+ pmc->bitmask = 0xffffffffffffffffUL;
+ }
+}
+
+/**
* kvm_pmu_overflow_clear - clear PMU overflow interrupt
* @vcpu: The vcpu pointer
* @val: the value guest writes to PMOVSCLR register
--
2.0.4
^ permalink raw reply related [flat|nested] 48+ messages in thread
* [PATCH v6 18/21] KVM: ARM64: Add PMU overflow interrupt routing
2015-12-08 12:47 [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Shannon Zhao
` (16 preceding siblings ...)
2015-12-08 12:47 ` [PATCH v6 17/21] KVM: ARM64: Add helper to handle PMCR register bits Shannon Zhao
@ 2015-12-08 12:47 ` Shannon Zhao
2015-12-08 17:37 ` Marc Zyngier
2015-12-08 12:47 ` [PATCH v6 19/21] KVM: ARM64: Reset PMU state when resetting vcpu Shannon Zhao
` (3 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Shannon Zhao @ 2015-12-08 12:47 UTC (permalink / raw)
To: linux-arm-kernel
From: Shannon Zhao <shannon.zhao@linaro.org>
When calling perf_event_create_kernel_counter to create a perf_event,
assign an overflow handler. Then when the perf event overflows, call
kvm_vcpu_kick() to sync the interrupt.
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/kvm/arm.c | 2 ++
include/kvm/arm_pmu.h | 2 ++
virt/kvm/arm/pmu.c | 52 ++++++++++++++++++++++++++++++++++++++++++++++++++-
3 files changed, 55 insertions(+), 1 deletion(-)
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index e06fd29..cd696ef 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -28,6 +28,7 @@
#include <linux/sched.h>
#include <linux/kvm.h>
#include <trace/events/kvm.h>
+#include <kvm/arm_pmu.h>
#define CREATE_TRACE_POINTS
#include "trace.h"
@@ -569,6 +570,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
* non-preemptible context.
*/
preempt_disable();
+ kvm_pmu_flush_hwstate(vcpu);
kvm_timer_flush_hwstate(vcpu);
kvm_vgic_flush_hwstate(vcpu);
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index a131f76..c4041008 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -38,6 +38,7 @@ struct kvm_pmu {
};
#ifdef CONFIG_KVM_ARM_PMU
+void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx);
void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val);
void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable);
@@ -48,6 +49,7 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
u32 select_idx);
void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u32 val);
#else
+void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {}
u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
{
return 0;
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 9b9c706..ff182d6 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -21,6 +21,7 @@
#include <linux/perf_event.h>
#include <asm/kvm_emulate.h>
#include <kvm/arm_pmu.h>
+#include <kvm/arm_vgic.h>
/**
* kvm_pmu_get_counter_value - get PMU counter value
@@ -90,6 +91,54 @@ static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
}
/**
+ * kvm_pmu_flush_hwstate - flush pmu state to cpu
+ * @vcpu: The vcpu pointer
+ *
+ * Inject virtual PMU IRQ if IRQ is pending for this cpu.
+ */
+void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu)
+{
+ struct kvm_pmu *pmu = &vcpu->arch.pmu;
+ u32 overflow;
+
+ if (pmu->irq_num == -1)
+ return;
+
+ if (!vcpu_mode_is_32bit(vcpu)) {
+ if (!(vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_E))
+ return;
+
+ overflow = vcpu_sys_reg(vcpu, PMCNTENSET_EL0)
+ & vcpu_sys_reg(vcpu, PMINTENSET_EL1)
+ & vcpu_sys_reg(vcpu, PMOVSSET_EL0);
+ } else {
+ if (!(vcpu_cp15(vcpu, c9_PMCR) & ARMV8_PMCR_E))
+ return;
+
+ overflow = vcpu_cp15(vcpu, c9_PMCNTENSET)
+ & vcpu_cp15(vcpu, c9_PMINTENSET)
+ & vcpu_cp15(vcpu, c9_PMOVSSET);
+ }
+
+ kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, pmu->irq_num,
+ overflow ? 1 : 0);
+}
+
+/**
+ * kvm_pmu_perf_overflow - handle the overflow of a perf event
+ *
+ * When the perf event overflows, call kvm_pmu_overflow_set to set the
+ * overflow status.
+ */
+static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
+ struct perf_sample_data *data,
+ struct pt_regs *regs)
+{
+ struct kvm_pmc *pmc = perf_event->overflow_handler_context;
+ struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
+ int idx = pmc->idx;
+
+ kvm_pmu_overflow_set(vcpu, BIT(idx));
+}
+
+/**
* kvm_pmu_enable_counter - enable selected PMU counter
* @vcpu: The vcpu pointer
* @val: the value guest writes to PMCNTENSET register
@@ -341,7 +390,8 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
/* The initial sample period (overflow count) of an event. */
attr.sample_period = (-counter) & pmc->bitmask;
- event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc);
+ event = perf_event_create_kernel_counter(&attr, -1, current,
+ kvm_pmu_perf_overflow, pmc);
if (IS_ERR(event)) {
printk_once("kvm: pmu event creation failed %ld\n",
PTR_ERR(event));
--
2.0.4
^ permalink raw reply related [flat|nested] 48+ messages in thread
* [PATCH v6 19/21] KVM: ARM64: Reset PMU state when resetting vcpu
2015-12-08 12:47 [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Shannon Zhao
` (17 preceding siblings ...)
2015-12-08 12:47 ` [PATCH v6 18/21] KVM: ARM64: Add PMU overflow interrupt routing Shannon Zhao
@ 2015-12-08 12:47 ` Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 20/21] KVM: ARM64: Free perf event of PMU when destroying vcpu Shannon Zhao
` (2 subsequent siblings)
21 siblings, 0 replies; 48+ messages in thread
From: Shannon Zhao @ 2015-12-08 12:47 UTC (permalink / raw)
To: linux-arm-kernel
From: Shannon Zhao <shannon.zhao@linaro.org>
When resetting a vcpu, reset its PMU state to the initial status.
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/kvm/reset.c | 3 +++
include/kvm/arm_pmu.h | 2 ++
virt/kvm/arm/pmu.c | 17 +++++++++++++++++
3 files changed, 22 insertions(+)
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index f34745c..dfbce78 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -120,6 +120,9 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
/* Reset system registers */
kvm_reset_sys_regs(vcpu);
+ /* Reset PMU */
+ kvm_pmu_vcpu_reset(vcpu);
+
/* Reset timer */
return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq);
}
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index c4041008..e0f5bfe 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -38,6 +38,7 @@ struct kvm_pmu {
};
#ifdef CONFIG_KVM_ARM_PMU
+void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu);
void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx);
void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val);
@@ -49,6 +50,7 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
u32 select_idx);
void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u32 val);
#else
+void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu) {}
void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {}
u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
{
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index ff182d6..01e8eb2 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -91,6 +91,23 @@ static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
}
/**
+ * kvm_pmu_vcpu_reset - reset pmu state for cpu
+ * @vcpu: The vcpu pointer
+ */
+void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
+{
+ int i;
+ struct kvm_pmu *pmu = &vcpu->arch.pmu;
+
+ for (i = 0; i < ARMV8_MAX_COUNTERS; i++) {
+ kvm_pmu_stop_counter(&pmu->pmc[i]);
+ pmu->pmc[i].idx = i;
+ pmu->pmc[i].bitmask = 0xffffffffUL;
+ }
+}
+
+/**
* kvm_pmu_flush_hwstate - flush pmu state to cpu
* @vcpu: The vcpu pointer
*
--
2.0.4
^ permalink raw reply related [flat|nested] 48+ messages in thread
* [PATCH v6 20/21] KVM: ARM64: Free perf event of PMU when destroying vcpu
2015-12-08 12:47 [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Shannon Zhao
` (18 preceding siblings ...)
2015-12-08 12:47 ` [PATCH v6 19/21] KVM: ARM64: Reset PMU state when resetting vcpu Shannon Zhao
@ 2015-12-08 12:47 ` Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 21/21] KVM: ARM64: Add a new kvm ARM PMU device Shannon Zhao
2015-12-08 17:56 ` [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Marc Zyngier
21 siblings, 0 replies; 48+ messages in thread
From: Shannon Zhao @ 2015-12-08 12:47 UTC (permalink / raw)
To: linux-arm-kernel
From: Shannon Zhao <shannon.zhao@linaro.org>
When KVM frees a vcpu, it needs to free the perf events of its PMU.
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/kvm/arm.c | 1 +
include/kvm/arm_pmu.h | 2 ++
virt/kvm/arm/pmu.c | 21 +++++++++++++++++++++
3 files changed, 24 insertions(+)
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index cd696ef..cea2176 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -259,6 +259,7 @@ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
kvm_mmu_free_memory_caches(vcpu);
kvm_timer_vcpu_terminate(vcpu);
kvm_vgic_vcpu_destroy(vcpu);
+ kvm_pmu_vcpu_destroy(vcpu);
kmem_cache_free(kvm_vcpu_cache, vcpu);
}
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index e0f5bfe..f7bc4bf 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -39,6 +39,7 @@ struct kvm_pmu {
#ifdef CONFIG_KVM_ARM_PMU
void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu);
+void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu);
void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx);
void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val);
@@ -51,6 +52,7 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u32 val);
#else
void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu) {}
+void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu) {}
void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {}
u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
{
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 01e8eb2..f8007c7 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -108,6 +108,27 @@ void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
}
/**
+ * kvm_pmu_vcpu_destroy - free perf events of the PMU for cpu
+ * @vcpu: The vcpu pointer
+ */
+void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
+{
+ int i;
+ struct kvm_pmu *pmu = &vcpu->arch.pmu;
+
+ for (i = 0; i < ARMV8_MAX_COUNTERS; i++) {
+ struct kvm_pmc *pmc = &pmu->pmc[i];
+
+ if (pmc->perf_event) {
+ perf_event_disable(pmc->perf_event);
+ perf_event_release_kernel(pmc->perf_event);
+ pmc->perf_event = NULL;
+ }
+ }
+}
+
+/**
* kvm_pmu_flush_hwstate - flush pmu state to cpu
* @vcpu: The vcpu pointer
*
--
2.0.4
^ permalink raw reply related [flat|nested] 48+ messages in thread
* [PATCH v6 21/21] KVM: ARM64: Add a new kvm ARM PMU device
2015-12-08 12:47 [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Shannon Zhao
` (19 preceding siblings ...)
2015-12-08 12:47 ` [PATCH v6 20/21] KVM: ARM64: Free perf event of PMU when destroying vcpu Shannon Zhao
@ 2015-12-08 12:47 ` Shannon Zhao
2015-12-08 17:43 ` Marc Zyngier
2015-12-08 17:56 ` [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Marc Zyngier
21 siblings, 1 reply; 48+ messages in thread
From: Shannon Zhao @ 2015-12-08 12:47 UTC (permalink / raw)
To: linux-arm-kernel
From: Shannon Zhao <shannon.zhao@linaro.org>
Add a new kvm device type KVM_DEV_TYPE_ARM_PMU_V3 for ARM PMU. Implement
the kvm_device_ops for it.
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
Documentation/virtual/kvm/devices/arm-pmu.txt | 16 +++++
arch/arm64/include/uapi/asm/kvm.h | 3 +
include/linux/kvm_host.h | 1 +
include/uapi/linux/kvm.h | 2 +
virt/kvm/arm/pmu.c | 93 +++++++++++++++++++++++++++
virt/kvm/kvm_main.c | 4 ++
6 files changed, 119 insertions(+)
create mode 100644 Documentation/virtual/kvm/devices/arm-pmu.txt
diff --git a/Documentation/virtual/kvm/devices/arm-pmu.txt b/Documentation/virtual/kvm/devices/arm-pmu.txt
new file mode 100644
index 0000000..5121f1f
--- /dev/null
+++ b/Documentation/virtual/kvm/devices/arm-pmu.txt
@@ -0,0 +1,16 @@
+ARM Virtual Performance Monitor Unit (vPMU)
+===========================================
+
+Device types supported:
+ KVM_DEV_TYPE_ARM_PMU_V3 ARM Performance Monitor Unit v3
+
+One PMU instance is instantiated per VCPU through this API.
+
+Groups:
+ KVM_DEV_ARM_PMU_GRP_IRQ
+ Attributes:
+ A value describing the interrupt number of the PMU overflow interrupt.
+ This interrupt should be a PPI.
+
+ Errors:
+ -EINVAL: Value set is out of the expected range (from 16 to 31)
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 2d4ca4b..568afa2 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -204,6 +204,9 @@ struct kvm_arch_memory_slot {
#define KVM_DEV_ARM_VGIC_GRP_CTRL 4
#define KVM_DEV_ARM_VGIC_CTRL_INIT 0
+/* Device Control API: ARM PMU */
+#define KVM_DEV_ARM_PMU_GRP_IRQ 0
+
/* KVM_IRQ_LINE irq field index values */
#define KVM_ARM_IRQ_TYPE_SHIFT 24
#define KVM_ARM_IRQ_TYPE_MASK 0xff
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c923350..608dea6 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1161,6 +1161,7 @@ extern struct kvm_device_ops kvm_mpic_ops;
extern struct kvm_device_ops kvm_xics_ops;
extern struct kvm_device_ops kvm_arm_vgic_v2_ops;
extern struct kvm_device_ops kvm_arm_vgic_v3_ops;
+extern struct kvm_device_ops kvm_arm_pmu_ops;
#ifdef CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 03f3618..4ba6fdd 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1032,6 +1032,8 @@ enum kvm_device_type {
#define KVM_DEV_TYPE_FLIC KVM_DEV_TYPE_FLIC
KVM_DEV_TYPE_ARM_VGIC_V3,
#define KVM_DEV_TYPE_ARM_VGIC_V3 KVM_DEV_TYPE_ARM_VGIC_V3
+ KVM_DEV_TYPE_ARM_PMU_V3,
+#define KVM_DEV_TYPE_ARM_PMU_V3 KVM_DEV_TYPE_ARM_PMU_V3
KVM_DEV_TYPE_MAX,
};
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index f8007c7..a84a4d7 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -19,10 +19,13 @@
#include <linux/kvm.h>
#include <linux/kvm_host.h>
#include <linux/perf_event.h>
+#include <linux/uaccess.h>
#include <asm/kvm_emulate.h>
#include <kvm/arm_pmu.h>
#include <kvm/arm_vgic.h>
+#include "vgic.h"
+
/**
* kvm_pmu_get_counter_value - get PMU counter value
* @vcpu: The vcpu pointer
@@ -438,3 +441,93 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
pmc->perf_event = event;
}
+
+static inline bool kvm_arm_pmu_initialized(struct kvm_vcpu *vcpu)
+{
+ return vcpu->arch.pmu.irq_num != -1;
+}
+
+static int kvm_arm_pmu_set_irq(struct kvm *kvm, int irq)
+{
+ int j;
+ struct kvm_vcpu *vcpu;
+
+ kvm_for_each_vcpu(j, vcpu, kvm) {
+ struct kvm_pmu *pmu = &vcpu->arch.pmu;
+
+ if (!kvm_arm_pmu_initialized(vcpu))
+ kvm_debug("Set kvm ARM PMU irq: %d\n", irq);
+ pmu->irq_num = irq;
+ }
+
+ return 0;
+}
+
+static int kvm_arm_pmu_create(struct kvm_device *dev, u32 type)
+{
+ int i;
+ struct kvm_vcpu *vcpu;
+ struct kvm *kvm = dev->kvm;
+
+ kvm_for_each_vcpu(i, vcpu, kvm) {
+ struct kvm_pmu *pmu = &vcpu->arch.pmu;
+
+ memset(pmu, 0, sizeof(*pmu));
+ kvm_pmu_vcpu_reset(vcpu);
+ pmu->irq_num = -1;
+ }
+
+ return 0;
+}
+
+static void kvm_arm_pmu_destroy(struct kvm_device *dev)
+{
+ kfree(dev);
+}
+
+static int kvm_arm_pmu_set_attr(struct kvm_device *dev,
+ struct kvm_device_attr *attr)
+{
+ switch (attr->group) {
+ case KVM_DEV_ARM_PMU_GRP_IRQ: {
+ int __user *uaddr = (int __user *)(long)attr->addr;
+ int reg;
+
+ if (get_user(reg, uaddr))
+ return -EFAULT;
+
+ if (reg < VGIC_NR_SGIS || reg >= VGIC_NR_PRIVATE_IRQS)
+ return -EINVAL;
+
+ return kvm_arm_pmu_set_irq(dev->kvm, reg);
+ }
+ }
+
+ return -ENXIO;
+}
+
+static int kvm_arm_pmu_get_attr(struct kvm_device *dev,
+ struct kvm_device_attr *attr)
+{
+ return 0;
+}
+
+static int kvm_arm_pmu_has_attr(struct kvm_device *dev,
+ struct kvm_device_attr *attr)
+{
+ switch (attr->group) {
+ case KVM_DEV_ARM_PMU_GRP_IRQ:
+ return 0;
+ }
+
+ return -ENXIO;
+}
+
+struct kvm_device_ops kvm_arm_pmu_ops = {
+ .name = "kvm-arm-pmu",
+ .create = kvm_arm_pmu_create,
+ .destroy = kvm_arm_pmu_destroy,
+ .set_attr = kvm_arm_pmu_set_attr,
+ .get_attr = kvm_arm_pmu_get_attr,
+ .has_attr = kvm_arm_pmu_has_attr,
+};
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 484079e..81a42cc 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2647,6 +2647,10 @@ static struct kvm_device_ops *kvm_device_ops_table[KVM_DEV_TYPE_MAX] = {
#ifdef CONFIG_KVM_XICS
[KVM_DEV_TYPE_XICS] = &kvm_xics_ops,
#endif
+
+#ifdef CONFIG_KVM_ARM_PMU
+ [KVM_DEV_TYPE_ARM_PMU_V3] = &kvm_arm_pmu_ops,
+#endif
};
int kvm_register_device_ops(struct kvm_device_ops *ops, u32 type)
--
2.0.4
* [PATCH v6 02/21] KVM: ARM64: Define PMU data structure for each vcpu
2015-12-08 12:47 ` [PATCH v6 02/21] KVM: ARM64: Define PMU data structure for each vcpu Shannon Zhao
@ 2015-12-08 13:37 ` Marc Zyngier
2015-12-08 13:53 ` Will Deacon
0 siblings, 1 reply; 48+ messages in thread
From: Marc Zyngier @ 2015-12-08 13:37 UTC (permalink / raw)
To: linux-arm-kernel
On 08/12/15 12:47, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
>
> Here we plan to support virtual PMU for guest by full software
> emulation, so define some basic structs and functions preparing for
> further steps. Define struct kvm_pmc for performance monitor counter and
> struct kvm_pmu for performance monitor unit for each vcpu. According to
> ARMv8 spec, the PMU contains at most 32 (ARMV8_MAX_COUNTERS) counters.
>
> Since this only supports ARM64 (or PMUv3), add a separate config symbol
> for it.
>
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
> arch/arm64/include/asm/kvm_host.h | 2 ++
> arch/arm64/kvm/Kconfig | 8 ++++++++
> include/kvm/arm_pmu.h | 40 +++++++++++++++++++++++++++++++++++++++
> 3 files changed, 50 insertions(+)
> create mode 100644 include/kvm/arm_pmu.h
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index a35ce72..42e15bb 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -37,6 +37,7 @@
>
> #include <kvm/arm_vgic.h>
> #include <kvm/arm_arch_timer.h>
> +#include <kvm/arm_pmu.h>
>
> #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
>
> @@ -132,6 +133,7 @@ struct kvm_vcpu_arch {
> /* VGIC state */
> struct vgic_cpu vgic_cpu;
> struct arch_timer_cpu timer_cpu;
> + struct kvm_pmu pmu;
>
> /*
> * Anything that is not used directly from assembly code goes
> diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
> index a5272c0..66da9a2 100644
> --- a/arch/arm64/kvm/Kconfig
> +++ b/arch/arm64/kvm/Kconfig
> @@ -36,6 +36,7 @@ config KVM
> select HAVE_KVM_EVENTFD
> select HAVE_KVM_IRQFD
> select KVM_ARM_VGIC_V3
> + select KVM_ARM_PMU
What if HW_PERF_EVENTS is not selected? Also, selecting HW_PERF_EVENTS
is not enough, and you probably need PERF_EVENTS as well, so this should
probably read:
select KVM_ARM_PMU if (HW_PERF_EVENTS && PERF_EVENTS)
> ---help---
> Support hosting virtualized guest machines.
> We don't support KVM with 16K page tables yet, due to the multiple
> @@ -48,6 +49,13 @@ config KVM_ARM_HOST
> ---help---
> Provides host support for ARM processors.
>
> +config KVM_ARM_PMU
> + bool
> + depends on KVM_ARM_HOST && HW_PERF_EVENTS
and this line should be dropped.
> + ---help---
> + Adds support for a virtual Performance Monitoring Unit (PMU) in
> + virtual machines.
> +
> source drivers/vhost/Kconfig
>
> endif # VIRTUALIZATION
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> new file mode 100644
> index 0000000..dea78f8
> --- /dev/null
> +++ b/include/kvm/arm_pmu.h
> @@ -0,0 +1,40 @@
> +/*
> + * Copyright (C) 2015 Linaro Ltd.
> + * Author: Shannon Zhao <shannon.zhao@linaro.org>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program. If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef __ASM_ARM_KVM_PMU_H
> +#define __ASM_ARM_KVM_PMU_H
> +
> +#include <linux/perf_event.h>
> +#ifdef CONFIG_KVM_ARM_PMU
> +#include <asm/pmu.h>
> +#endif
> +
> +struct kvm_pmc {
> + u8 idx;/* index into the pmu->pmc array */
> + struct perf_event *perf_event;
> + u64 bitmask;
> +};
> +
> +struct kvm_pmu {
> +#ifdef CONFIG_KVM_ARM_PMU
> + /* PMU IRQ Number per VCPU */
> + int irq_num;
> + struct kvm_pmc pmc[ARMV8_MAX_COUNTERS];
> +#endif
> +};
> +
> +#endif
>
The repetition of #ifdef CONFIG_KVM_ARM_PMU is a bit ugly. How about
something like this instead:
#ifndef __ASM_ARM_KVM_PMU_H
#define __ASM_ARM_KVM_PMU_H
#ifdef CONFIG_KVM_ARM_PMU
#include <linux/perf_event.h>
#include <asm/pmu.h>
struct kvm_pmc {
u8 idx; /* index into the pmu->pmc array */
struct perf_event *perf_event;
u64 bitmask;
};
struct kvm_pmu {
/* PMU IRQ Number per VCPU */
int irq_num;
struct kvm_pmc pmc[ARMV8_MAX_COUNTERS];
};
#else
struct kvm_pmu {
};
#endif
#endif
and you can then populate the rest of the function prototype and stubs
where needed.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
* [PATCH v6 02/21] KVM: ARM64: Define PMU data structure for each vcpu
2015-12-08 13:37 ` Marc Zyngier
@ 2015-12-08 13:53 ` Will Deacon
2015-12-08 14:10 ` Marc Zyngier
0 siblings, 1 reply; 48+ messages in thread
From: Will Deacon @ 2015-12-08 13:53 UTC (permalink / raw)
To: linux-arm-kernel
On Tue, Dec 08, 2015 at 01:37:14PM +0000, Marc Zyngier wrote:
> On 08/12/15 12:47, Shannon Zhao wrote:
> > From: Shannon Zhao <shannon.zhao@linaro.org>
> >
> > Here we plan to support virtual PMU for guest by full software
> > emulation, so define some basic structs and functions preparing for
> > futher steps. Define struct kvm_pmc for performance monitor counter and
> > struct kvm_pmu for performance monitor unit for each vcpu. According to
> > struct kvm_pmu for performance monitor unit for each vcpu. According to
> > ARMv8 spec, the PMU contains at most 32(ARMV8_MAX_COUNTERS) counters.
> >
> > Since this only supports ARM64 (or PMUv3), add a separate config symbol
> > for it.
> >
> > Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> > ---
> > arch/arm64/include/asm/kvm_host.h | 2 ++
> > arch/arm64/kvm/Kconfig | 8 ++++++++
> > include/kvm/arm_pmu.h | 40 +++++++++++++++++++++++++++++++++++++++
> > 3 files changed, 50 insertions(+)
> > create mode 100644 include/kvm/arm_pmu.h
[...]
> > diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
> > index a5272c0..66da9a2 100644
> > --- a/arch/arm64/kvm/Kconfig
> > +++ b/arch/arm64/kvm/Kconfig
> > @@ -36,6 +36,7 @@ config KVM
> > select HAVE_KVM_EVENTFD
> > select HAVE_KVM_IRQFD
> > select KVM_ARM_VGIC_V3
> > + select KVM_ARM_PMU
>
> What if HW_PERF_EVENTS is not selected? Also, selecting HW_PERF_EVENTS
> is not enough, and you probably need PERF_EVENTS as well, so this should
> probably read:
>
> select KVM_ARM_PMU if (HW_PERF_EVENTS && PERF_EVENTS)
HW_PERF_EVENTS depends on ARM_PMU which in turn depends on PERF_EVENTS.
Will
* [PATCH v6 02/21] KVM: ARM64: Define PMU data structure for each vcpu
2015-12-08 13:53 ` Will Deacon
@ 2015-12-08 14:10 ` Marc Zyngier
2015-12-08 14:14 ` Shannon Zhao
0 siblings, 1 reply; 48+ messages in thread
From: Marc Zyngier @ 2015-12-08 14:10 UTC (permalink / raw)
To: linux-arm-kernel
On 08/12/15 13:53, Will Deacon wrote:
> On Tue, Dec 08, 2015 at 01:37:14PM +0000, Marc Zyngier wrote:
>> On 08/12/15 12:47, Shannon Zhao wrote:
>>> From: Shannon Zhao <shannon.zhao@linaro.org>
>>>
>>> Here we plan to support virtual PMU for guest by full software
>>> emulation, so define some basic structs and functions preparing for
>>> further steps. Define struct kvm_pmc for performance monitor counter and
>>> struct kvm_pmu for performance monitor unit for each vcpu. According to
>>> ARMv8 spec, the PMU contains at most 32(ARMV8_MAX_COUNTERS) counters.
>>>
>>> Since this only supports ARM64 (or PMUv3), add a separate config symbol
>>> for it.
>>>
>>> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>>> ---
>>> arch/arm64/include/asm/kvm_host.h | 2 ++
>>> arch/arm64/kvm/Kconfig | 8 ++++++++
>>> include/kvm/arm_pmu.h | 40 +++++++++++++++++++++++++++++++++++++++
>>> 3 files changed, 50 insertions(+)
>>> create mode 100644 include/kvm/arm_pmu.h
>
> [...]
>
>>> diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
>>> index a5272c0..66da9a2 100644
>>> --- a/arch/arm64/kvm/Kconfig
>>> +++ b/arch/arm64/kvm/Kconfig
>>> @@ -36,6 +36,7 @@ config KVM
>>> select HAVE_KVM_EVENTFD
>>> select HAVE_KVM_IRQFD
>>> select KVM_ARM_VGIC_V3
>>> + select KVM_ARM_PMU
>>
>> What if HW_PERF_EVENTS is not selected? Also, selecting HW_PERF_EVENTS
>> is not enough, and you probably need PERF_EVENTS as well, so this should
>> probably read:
>>
>> select KVM_ARM_PMU if (HW_PERF_EVENTS && PERF_EVENTS)
>
> HW_PERF_EVENTS depends on ARM_PMU which in turn depends on PERF_EVENTS.
in which case, let's make it:
select KVM_ARM_PMU if HW_PERF_EVENTS
which should give us the minimal chain. I hate the kernel config
language! ;-)
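If Marc's suggestion here is adopted as-is, the resulting arch/arm64/kvm/Kconfig fragment would look roughly like this (a sketch of the expected outcome of this thread, not the committed text; the elided lines stand for the existing selects):

```kconfig
config KVM
	bool "Kernel-based Virtual Machine (KVM) support"
	# ... existing selects ...
	select KVM_ARM_VGIC_V3
	select KVM_ARM_PMU if HW_PERF_EVENTS

config KVM_ARM_PMU
	bool
	---help---
	  Adds support for a virtual Performance Monitoring Unit (PMU) in
	  virtual machines.
```

HW_PERF_EVENTS already depends on ARM_PMU, which depends on PERF_EVENTS, so the single condition pulls in the whole chain.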
M.
--
Jazz is not dead. It just smells funny...
* [PATCH v6 02/21] KVM: ARM64: Define PMU data structure for each vcpu
2015-12-08 14:10 ` Marc Zyngier
@ 2015-12-08 14:14 ` Shannon Zhao
0 siblings, 0 replies; 48+ messages in thread
From: Shannon Zhao @ 2015-12-08 14:14 UTC (permalink / raw)
To: linux-arm-kernel
On 2015/12/8 22:10, Marc Zyngier wrote:
> On 08/12/15 13:53, Will Deacon wrote:
>> On Tue, Dec 08, 2015 at 01:37:14PM +0000, Marc Zyngier wrote:
>>> On 08/12/15 12:47, Shannon Zhao wrote:
>>>> From: Shannon Zhao <shannon.zhao@linaro.org>
>>>>
>>>> Here we plan to support virtual PMU for guest by full software
>>>> emulation, so define some basic structs and functions preparing for
>>>> further steps. Define struct kvm_pmc for performance monitor counter and
>>>> struct kvm_pmu for performance monitor unit for each vcpu. According to
>>>> ARMv8 spec, the PMU contains at most 32(ARMV8_MAX_COUNTERS) counters.
>>>>
>>>> Since this only supports ARM64 (or PMUv3), add a separate config symbol
>>>> for it.
>>>>
>>>> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>>>> ---
>>>> arch/arm64/include/asm/kvm_host.h | 2 ++
>>>> arch/arm64/kvm/Kconfig | 8 ++++++++
>>>> include/kvm/arm_pmu.h | 40 +++++++++++++++++++++++++++++++++++++++
>>>> 3 files changed, 50 insertions(+)
>>>> create mode 100644 include/kvm/arm_pmu.h
>>
>> [...]
>>
>>>> diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
>>>> index a5272c0..66da9a2 100644
>>>> --- a/arch/arm64/kvm/Kconfig
>>>> +++ b/arch/arm64/kvm/Kconfig
>>>> @@ -36,6 +36,7 @@ config KVM
>>>> select HAVE_KVM_EVENTFD
>>>> select HAVE_KVM_IRQFD
>>>> select KVM_ARM_VGIC_V3
>>>> + select KVM_ARM_PMU
>>>
>>> What if HW_PERF_EVENTS is not selected? Also, selecting HW_PERF_EVENTS
>>> is not enough, and you probably need PERF_EVENTS as well, so this should
>>> probably read:
>>>
>>> select KVM_ARM_PMU if (HW_PERF_EVENTS && PERF_EVENTS)
>>
>> HW_PERF_EVENTS depends on ARM_PMU which in turn depends on PERF_EVENTS.
>
Yeah, this is the reason why I chose HW_PERF_EVENTS.
> in which case, let's make it:
>
> select KVM_ARM_PMU if HW_PERF_EVENTS
>
Sure, will do.
> which should give us the minimal chain. I hate the kernel config
> language! ;-)
>
> M.
>
--
Shannon
* [PATCH v6 06/21] KVM: ARM64: Add reset and access handlers for PMCEID0 and PMCEID1 register
2015-12-08 12:47 ` [PATCH v6 06/21] KVM: ARM64: Add reset and access handlers for PMCEID0 and PMCEID1 register Shannon Zhao
@ 2015-12-08 14:23 ` Marc Zyngier
0 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2015-12-08 14:23 UTC (permalink / raw)
To: linux-arm-kernel
On 08/12/15 12:47, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
>
> Add reset handler which gets host value of PMCEID0 or PMCEID1. Since
> write action to PMCEID0 or PMCEID1 is ignored, add a new case for this.
>
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
> arch/arm64/kvm/sys_regs.c | 29 +++++++++++++++++++++++++----
> 1 file changed, 25 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index d81f7ac..1bcb2b7 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -452,6 +452,19 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> vcpu_sys_reg(vcpu, r->reg) = val;
> }
>
> +static void reset_pmceid(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> + u64 pmceid;
> +
> + if (r->reg == PMCEID0_EL0)
> + asm volatile("mrs %0, pmceid0_el0\n" : "=r" (pmceid));
> + else
> + /* PMCEID1_EL0 */
> + asm volatile("mrs %0, pmceid1_el0\n" : "=r" (pmceid));
> +
> + vcpu_sys_reg(vcpu, r->reg) = pmceid;
> +}
> +
> /* PMU registers accessor. */
> static bool access_pmu_regs(struct kvm_vcpu *vcpu,
> struct sys_reg_params *p,
> @@ -469,6 +482,9 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
> vcpu_sys_reg(vcpu, r->reg) = val;
> break;
> }
> + case PMCEID0_EL0:
> + case PMCEID1_EL0:
> + return ignore_write(vcpu, p);
> default:
> vcpu_sys_reg(vcpu, r->reg) = p->regval;
> break;
> @@ -693,10 +709,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> access_pmu_regs, reset_unknown, PMSELR_EL0 },
> /* PMCEID0_EL0 */
> { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b110),
> - trap_raz_wi },
> + access_pmu_regs, reset_pmceid, PMCEID0_EL0 },
> /* PMCEID1_EL0 */
> { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b111),
> - trap_raz_wi },
> + access_pmu_regs, reset_pmceid, PMCEID1_EL0 },
> /* PMCCNTR_EL0 */
> { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b000),
> trap_raz_wi },
> @@ -926,6 +942,9 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
> vcpu_cp15(vcpu, r->reg) = val;
> break;
> }
> + case c9_PMCEID0:
> + case c9_PMCEID1:
> + return ignore_write(vcpu, p);
> default:
> vcpu_cp15(vcpu, r->reg) = p->regval;
> break;
> @@ -983,8 +1002,10 @@ static const struct sys_reg_desc cp15_regs[] = {
> { Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
> { Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmu_cp15_regs,
> NULL, c9_PMSELR },
> - { Op1( 0), CRn( 9), CRm(12), Op2( 6), trap_raz_wi },
> - { Op1( 0), CRn( 9), CRm(12), Op2( 7), trap_raz_wi },
> + { Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmu_cp15_regs,
> + NULL, c9_PMCEID0 },
> + { Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmu_cp15_regs,
> + NULL, c9_PMCEID1 },
> { Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi },
> { Op1( 0), CRn( 9), CRm(13), Op2( 1), trap_raz_wi },
> { Op1( 0), CRn( 9), CRm(13), Op2( 2), trap_raz_wi },
>
That's a lot of infrastructure for something that is essentially a
constant that doesn't need to be stored in the sysreg array.
I suggest you drop the constants for PMCEID{0,1}_EL0 and
c9_PMCEID{0,1}, and turn the code into something like this:
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 251d517..09c38d0 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -453,17 +453,22 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
vcpu_sys_reg(vcpu, r->reg) = val;
}
-static void reset_pmceid(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static bool access_pmceid(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
{
u64 pmceid;
- if (r->reg == PMCEID0_EL0)
+ if (p->is_write)
+ return ignore_write(vcpu, p);
+
+ if (!(p->Op2 & 1))
asm volatile("mrs %0, pmceid0_el0\n" : "=r" (pmceid));
else
- /* PMCEID1_EL0 */
asm volatile("mrs %0, pmceid1_el0\n" : "=r" (pmceid));
- vcpu_sys_reg(vcpu, r->reg) = pmceid;
+ p->regval = pmceid;
+ return true;
}
static bool pmu_counter_idx_valid(u64 pmcr, u64 idx)
@@ -624,9 +629,6 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
kvm_pmu_handle_pmcr(vcpu, val);
break;
}
- case PMCEID0_EL0:
- case PMCEID1_EL0:
- return ignore_write(vcpu, p);
default:
vcpu_sys_reg(vcpu, r->reg) = p->regval;
break;
@@ -873,10 +875,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
access_pmu_regs, reset_unknown, PMSELR_EL0 },
/* PMCEID0_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b110),
- access_pmu_regs, reset_pmceid, PMCEID0_EL0 },
+ access_pmceid },
/* PMCEID1_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b111),
- access_pmu_regs, reset_pmceid, PMCEID1_EL0 },
+ access_pmceid },
/* PMCCNTR_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b000),
access_pmu_regs, reset_unknown, PMCCNTR_EL0 },
@@ -1223,9 +1225,6 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
kvm_pmu_handle_pmcr(vcpu, val);
break;
}
- case c9_PMCEID0:
- case c9_PMCEID1:
- return ignore_write(vcpu, p);
default:
vcpu_cp15(vcpu, r->reg) = p->regval;
break;
@@ -1310,10 +1309,8 @@ static const struct sys_reg_desc cp15_regs[] = {
NULL, c9_PMSWINC },
{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmu_cp15_regs,
NULL, c9_PMSELR },
- { Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmu_cp15_regs,
- NULL, c9_PMCEID0 },
- { Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmu_cp15_regs,
- NULL, c9_PMCEID1 },
+ { Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmceid },
+ { Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmceid },
{ Op1( 0), CRn( 9), CRm(13), Op2( 0), access_pmu_cp15_regs,
NULL, c9_PMCCNTR },
{ Op1( 0), CRn( 9), CRm(13), Op2( 1), access_pmu_pmxevtyper },
All we need is an accessor, nothing else.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
* [PATCH v6 07/21] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function
2015-12-08 12:47 ` [PATCH v6 07/21] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function Shannon Zhao
@ 2015-12-08 15:43 ` Marc Zyngier
2015-12-09 7:38 ` Shannon Zhao
0 siblings, 1 reply; 48+ messages in thread
From: Marc Zyngier @ 2015-12-08 15:43 UTC (permalink / raw)
To: linux-arm-kernel
On 08/12/15 12:47, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
>
> When we use tools like perf on host, perf passes the event type and the
> id of this event type category to kernel, then kernel will map them to
> hardware event number and write this number to PMU PMEVTYPER<n>_EL0
> register. When getting the event number in KVM, directly use raw event
> type to create a perf_event for it.
>
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
> arch/arm64/include/asm/pmu.h | 2 +
> arch/arm64/kvm/Makefile | 1 +
> include/kvm/arm_pmu.h | 13 ++++
> virt/kvm/arm/pmu.c | 138 +++++++++++++++++++++++++++++++++++++++++++
> 4 files changed, 154 insertions(+)
> create mode 100644 virt/kvm/arm/pmu.c
>
> diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
> index 4264ea0..e3cb6b3 100644
> --- a/arch/arm64/include/asm/pmu.h
> +++ b/arch/arm64/include/asm/pmu.h
> @@ -28,6 +28,8 @@
> #define ARMV8_PMCR_D (1 << 3) /* CCNT counts every 64th cpu cycle */
> #define ARMV8_PMCR_X (1 << 4) /* Export to ETM */
> #define ARMV8_PMCR_DP (1 << 5) /* Disable CCNT if non-invasive debug*/
> +/* Determines which PMCCNTR_EL0 bit generates an overflow */
> +#define ARMV8_PMCR_LC (1 << 6)
> #define ARMV8_PMCR_N_SHIFT 11 /* Number of counters supported */
> #define ARMV8_PMCR_N_MASK 0x1f
> #define ARMV8_PMCR_MASK 0x3f /* Mask for writable bits */
> diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
> index 1949fe5..18d56d8 100644
> --- a/arch/arm64/kvm/Makefile
> +++ b/arch/arm64/kvm/Makefile
> @@ -27,3 +27,4 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3.o
> kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3-emul.o
> kvm-$(CONFIG_KVM_ARM_HOST) += vgic-v3-switch.o
> kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arch_timer.o
> +kvm-$(CONFIG_KVM_ARM_PMU) += $(KVM)/arm/pmu.o
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index dea78f8..36bde48 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -37,4 +37,17 @@ struct kvm_pmu {
> #endif
> };
>
> +#ifdef CONFIG_KVM_ARM_PMU
> +u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx);
> +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
> + u32 select_idx);
> +#else
> +u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
> +{
> + return 0;
> +}
> +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
> + u32 select_idx) {}
> +#endif
> +
> #endif
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> new file mode 100644
> index 0000000..15babf1
> --- /dev/null
> +++ b/virt/kvm/arm/pmu.c
> @@ -0,0 +1,138 @@
> +/*
> + * Copyright (C) 2015 Linaro Ltd.
> + * Author: Shannon Zhao <shannon.zhao@linaro.org>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program. If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <linux/cpu.h>
> +#include <linux/kvm.h>
> +#include <linux/kvm_host.h>
> +#include <linux/perf_event.h>
> +#include <asm/kvm_emulate.h>
> +#include <kvm/arm_pmu.h>
> +
> +/**
> + * kvm_pmu_get_counter_value - get PMU counter value
> + * @vcpu: The vcpu pointer
> + * @select_idx: The counter index
> + */
> +u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
> +{
> + u64 counter, enabled, running;
> + struct kvm_pmu *pmu = &vcpu->arch.pmu;
> + struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +
> + if (!vcpu_mode_is_32bit(vcpu))
> + counter = vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + select_idx);
> + else
> + counter = vcpu_cp15(vcpu, c14_PMEVCNTR0 + select_idx);
> +
> + if (pmc->perf_event)
> + counter += perf_event_read_value(pmc->perf_event, &enabled,
> + &running);
> +
> + return counter & pmc->bitmask;
This one confused me for a while. Is it the case that you return
whatever is in the vcpu view of the counter, plus anything that perf
itself has counted? If so, I'd appreciate a comment here...
> +}
> +
> +static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u32 select_idx)
> +{
> + if (!vcpu_mode_is_32bit(vcpu))
> + return (vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_E) &
> + (vcpu_sys_reg(vcpu, PMCNTENSET_EL0) >> select_idx);
This looks wrong. Shouldn't it be:
return ((vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_E) &&
(vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & (1 << select_idx)));
> + else
> + return (vcpu_sys_reg(vcpu, c9_PMCR) & ARMV8_PMCR_E) &
> + (vcpu_sys_reg(vcpu, c9_PMCNTENSET) >> select_idx);
> +}
Also, I don't really see why we need to check the 32bit version, which
has the exact same content.
> +
> +static inline struct kvm_vcpu *kvm_pmc_to_vcpu(struct kvm_pmc *pmc)
> +{
> + struct kvm_pmu *pmu;
> + struct kvm_vcpu_arch *vcpu_arch;
> +
> + pmc -= pmc->idx;
> + pmu = container_of(pmc, struct kvm_pmu, pmc[0]);
> + vcpu_arch = container_of(pmu, struct kvm_vcpu_arch, pmu);
> + return container_of(vcpu_arch, struct kvm_vcpu, arch);
> +}
> +
> +/**
> + * kvm_pmu_stop_counter - stop PMU counter
> + * @pmc: The PMU counter pointer
> + *
> + * If this counter has been configured to monitor some event, release it here.
> + */
> +static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
> +{
> + struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
> + u64 counter;
> +
> + if (pmc->perf_event) {
> + counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);
> + if (!vcpu_mode_is_32bit(vcpu))
> + vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + pmc->idx) = counter;
> + else
> + vcpu_cp15(vcpu, c14_PMEVCNTR0 + pmc->idx) = counter;
Same thing - we don't need to make a difference between 32 and 64bit.
> +
> + perf_event_release_kernel(pmc->perf_event);
> + pmc->perf_event = NULL;
> + }
> +}
> +
> +/**
> + * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
> + * @vcpu: The vcpu pointer
> + * @data: The data guest writes to PMXEVTYPER_EL0
> + * @select_idx: The number of selected counter
> + *
> + * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
> + * event with given hardware event number. Here we call perf_event API to
> + * emulate this action and create a kernel perf event for it.
> + */
> +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
> + u32 select_idx)
> +{
> + struct kvm_pmu *pmu = &vcpu->arch.pmu;
> + struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> + struct perf_event *event;
> + struct perf_event_attr attr;
> + u32 eventsel;
> + u64 counter;
> +
> + kvm_pmu_stop_counter(pmc);
Wait. I didn't realize this before, but you have the vcpu right here.
Why don't you pass it as a parameter to kvm_pmu_stop_counter and avoid
the kvm_pmc_to_vcpu thing altogether?
> + eventsel = data & ARMV8_EVTYPE_EVENT;
> +
> + memset(&attr, 0, sizeof(struct perf_event_attr));
> + attr.type = PERF_TYPE_RAW;
> + attr.size = sizeof(attr);
> + attr.pinned = 1;
> + attr.disabled = kvm_pmu_counter_is_enabled(vcpu, select_idx);
> + attr.exclude_user = data & ARMV8_EXCLUDE_EL0 ? 1 : 0;
> + attr.exclude_kernel = data & ARMV8_EXCLUDE_EL1 ? 1 : 0;
> + attr.exclude_hv = 1; /* Don't count EL2 events */
> + attr.exclude_host = 1; /* Don't count host events */
> + attr.config = eventsel;
> +
> + counter = kvm_pmu_get_counter_value(vcpu, select_idx);
> + /* The initial sample period (overflow count) of an event. */
> + attr.sample_period = (-counter) & pmc->bitmask;
> +
> + event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc);
> + if (IS_ERR(event)) {
> + printk_once("kvm: pmu event creation failed %ld\n",
> + PTR_ERR(event));
> + return;
> + }
> +
> + pmc->perf_event = event;
> +}
>
Thanks,
M.
--
Jazz is not dead. It just smells funny...
* [PATCH v6 08/21] KVM: ARM64: Add access handler for PMEVTYPERn and PMCCFILTR register
2015-12-08 12:47 ` [PATCH v6 08/21] KVM: ARM64: Add access handler for PMEVTYPERn and PMCCFILTR register Shannon Zhao
@ 2015-12-08 16:17 ` Marc Zyngier
0 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2015-12-08 16:17 UTC (permalink / raw)
To: linux-arm-kernel
On 08/12/15 12:47, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
>
> Add access handler which emulates writing and reading PMEVTYPERn or
> PMCCFILTR register. When writing to PMEVTYPERn or PMCCFILTR, call
> kvm_pmu_set_counter_event_type to create a perf_event for the selected
> event type.
>
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
> arch/arm64/kvm/sys_regs.c | 98 +++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 98 insertions(+)
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 1bcb2b7..2d8bd15 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -474,6 +474,12 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
>
> if (p->is_write) {
> switch (r->reg) {
> + case PMEVTYPER0_EL0 ... PMCCFILTR_EL0: {
Please don't do that; it is dangerous.
I'm fine with PMEVTYPER0_EL0 ... PMEVTYPER30_EL0, but not with
PMCCFILTR_EL0. It could have been moved to another offset in the
register file, and nobody would notice this. So keep it as a separate
case statement.
> + val = r->reg - PMEVTYPER0_EL0;
> + kvm_pmu_set_counter_event_type(vcpu, p->regval, val);
> + vcpu_sys_reg(vcpu, r->reg) = p->regval;
> + break;
> + }
> case PMCR_EL0: {
> /* Only update writeable bits of PMCR */
> val = vcpu_sys_reg(vcpu, r->reg);
> @@ -522,6 +528,13 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
> { Op0(0b10), Op1(0b000), CRn(0b0000), CRm((n)), Op2(0b111), \
> trap_wcr, reset_wcr, n, 0, get_wcr, set_wcr }
>
> +/* Macro to expand the PMEVTYPERn_EL0 register */
> +#define PMU_PMEVTYPER_EL0(n) \
> + /* PMEVTYPERn_EL0 */ \
> + { Op0(0b11), Op1(0b011), CRn(0b1110), \
> + CRm((0b1100 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)), \
> + access_pmu_regs, reset_unknown, (PMEVTYPER0_EL0 + n), }
> +
> /*
> * Architected system registers.
> * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
> @@ -736,6 +749,42 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> { Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b011),
> NULL, reset_unknown, TPIDRRO_EL0 },
>
> + /* PMEVTYPERn_EL0 */
> + PMU_PMEVTYPER_EL0(0),
> + PMU_PMEVTYPER_EL0(1),
> + PMU_PMEVTYPER_EL0(2),
> + PMU_PMEVTYPER_EL0(3),
> + PMU_PMEVTYPER_EL0(4),
> + PMU_PMEVTYPER_EL0(5),
> + PMU_PMEVTYPER_EL0(6),
> + PMU_PMEVTYPER_EL0(7),
> + PMU_PMEVTYPER_EL0(8),
> + PMU_PMEVTYPER_EL0(9),
> + PMU_PMEVTYPER_EL0(10),
> + PMU_PMEVTYPER_EL0(11),
> + PMU_PMEVTYPER_EL0(12),
> + PMU_PMEVTYPER_EL0(13),
> + PMU_PMEVTYPER_EL0(14),
> + PMU_PMEVTYPER_EL0(15),
> + PMU_PMEVTYPER_EL0(16),
> + PMU_PMEVTYPER_EL0(17),
> + PMU_PMEVTYPER_EL0(18),
> + PMU_PMEVTYPER_EL0(19),
> + PMU_PMEVTYPER_EL0(20),
> + PMU_PMEVTYPER_EL0(21),
> + PMU_PMEVTYPER_EL0(22),
> + PMU_PMEVTYPER_EL0(23),
> + PMU_PMEVTYPER_EL0(24),
> + PMU_PMEVTYPER_EL0(25),
> + PMU_PMEVTYPER_EL0(26),
> + PMU_PMEVTYPER_EL0(27),
> + PMU_PMEVTYPER_EL0(28),
> + PMU_PMEVTYPER_EL0(29),
> + PMU_PMEVTYPER_EL0(30),
> + /* PMCCFILTR_EL0 */
> + { Op0(0b11), Op1(0b011), CRn(0b1110), CRm(0b1111), Op2(0b111),
> + access_pmu_regs, reset_unknown, PMCCFILTR_EL0, },
> +
> /* DACR32_EL2 */
> { Op0(0b11), Op1(0b100), CRn(0b0011), CRm(0b0000), Op2(0b000),
> NULL, reset_unknown, DACR32_EL2 },
> @@ -934,6 +983,12 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
>
> if (p->is_write) {
> switch (r->reg) {
> + case c14_PMEVTYPER0 ... c14_PMCCFILTR: {
Same problem here.
> + val = r->reg - c14_PMEVTYPER0;
> + kvm_pmu_set_counter_event_type(vcpu, p->regval, val);
> + vcpu_cp15(vcpu, r->reg) = p->regval;
> + break;
> + }
> case c9_PMCR: {
> /* Only update writeable bits of PMCR */
> val = vcpu_cp15(vcpu, r->reg);
> @@ -967,6 +1022,13 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
> return true;
> }
>
> +/* Macro to expand the PMEVTYPERn register */
> +#define PMU_PMEVTYPER(n) \
> + /* PMEVTYPERn */ \
> + { Op1(0), CRn(0b1110), \
> + CRm((0b1100 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)), \
> + access_pmu_cp15_regs, NULL, (c14_PMEVTYPER0 + n), }
> +
> /*
> * Trapped cp15 registers. TTBR0/TTBR1 get a double encoding,
> * depending on the way they are accessed (as a 32bit or a 64bit
> @@ -1022,6 +1084,42 @@ static const struct sys_reg_desc cp15_regs[] = {
> { Op1( 0), CRn(12), CRm(12), Op2( 5), trap_raz_wi },
>
> { Op1( 0), CRn(13), CRm( 0), Op2( 1), access_vm_reg, NULL, c13_CID },
> +
> + /* PMEVTYPERn */
> + PMU_PMEVTYPER(0),
> + PMU_PMEVTYPER(1),
> + PMU_PMEVTYPER(2),
> + PMU_PMEVTYPER(3),
> + PMU_PMEVTYPER(4),
> + PMU_PMEVTYPER(5),
> + PMU_PMEVTYPER(6),
> + PMU_PMEVTYPER(7),
> + PMU_PMEVTYPER(8),
> + PMU_PMEVTYPER(9),
> + PMU_PMEVTYPER(10),
> + PMU_PMEVTYPER(11),
> + PMU_PMEVTYPER(12),
> + PMU_PMEVTYPER(13),
> + PMU_PMEVTYPER(14),
> + PMU_PMEVTYPER(15),
> + PMU_PMEVTYPER(16),
> + PMU_PMEVTYPER(17),
> + PMU_PMEVTYPER(18),
> + PMU_PMEVTYPER(19),
> + PMU_PMEVTYPER(20),
> + PMU_PMEVTYPER(21),
> + PMU_PMEVTYPER(22),
> + PMU_PMEVTYPER(23),
> + PMU_PMEVTYPER(24),
> + PMU_PMEVTYPER(25),
> + PMU_PMEVTYPER(26),
> + PMU_PMEVTYPER(27),
> + PMU_PMEVTYPER(28),
> + PMU_PMEVTYPER(29),
> + PMU_PMEVTYPER(30),
> + /* PMCCFILTR */
> + { Op1(0), CRn(14), CRm(15), Op2(7), access_pmu_cp15_regs,
> + NULL, c14_PMCCFILTR },
> };
>
> static const struct sys_reg_desc cp15_64_regs[] = {
>
Thanks,
M.
--
Jazz is not dead. It just smells funny...
* [PATCH v6 10/21] KVM: ARM64: Add access handler for PMEVCNTRn and PMCCNTR register
2015-12-08 12:47 ` [PATCH v6 10/21] KVM: ARM64: Add access handler for PMEVCNTRn and PMCCNTR register Shannon Zhao
@ 2015-12-08 16:30 ` Marc Zyngier
2015-12-10 11:36 ` Shannon Zhao
0 siblings, 1 reply; 48+ messages in thread
From: Marc Zyngier @ 2015-12-08 16:30 UTC (permalink / raw)
To: linux-arm-kernel
On 08/12/15 12:47, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
>
> Since the reset value of PMEVCNTRn or PMCCNTR is UNKNOWN, use
> reset_unknown for its reset handler. Add access handler which emulates
> writing and reading PMEVCNTRn or PMCCNTR register. When reading
> PMEVCNTRn or PMCCNTR, call perf_event_read_value to get the count value
> of the perf event.
>
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
> arch/arm64/kvm/sys_regs.c | 107 +++++++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 105 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index c116a1b..f7a73b5 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -525,6 +525,12 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
>
> if (p->is_write) {
> switch (r->reg) {
> + case PMEVCNTR0_EL0 ... PMCCNTR_EL0: {
Same problem as previously mentioned.
> + val = kvm_pmu_get_counter_value(vcpu,
> + r->reg - PMEVCNTR0_EL0);
> + vcpu_sys_reg(vcpu, r->reg) += (s64)p->regval - val;
> + break;
> + }
> case PMEVTYPER0_EL0 ... PMCCFILTR_EL0: {
> val = r->reg - PMEVTYPER0_EL0;
> kvm_pmu_set_counter_event_type(vcpu, p->regval, val);
> @@ -548,6 +554,12 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
> }
> } else {
> switch (r->reg) {
> + case PMEVCNTR0_EL0 ... PMCCNTR_EL0: {
> + val = kvm_pmu_get_counter_value(vcpu,
> + r->reg - PMEVCNTR0_EL0);
> + p->regval = val;
> + break;
> + }
> case PMCR_EL0: {
> /* PMCR.P & PMCR.C are RAZ */
> val = vcpu_sys_reg(vcpu, r->reg)
> @@ -579,6 +591,13 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
> { Op0(0b10), Op1(0b000), CRn(0b0000), CRm((n)), Op2(0b111), \
> trap_wcr, reset_wcr, n, 0, get_wcr, set_wcr }
>
> +/* Macro to expand the PMEVCNTRn_EL0 register */
> +#define PMU_PMEVCNTR_EL0(n) \
> + /* PMEVCNTRn_EL0 */ \
> + { Op0(0b11), Op1(0b011), CRn(0b1110), \
> + CRm((0b1000 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)), \
> + access_pmu_regs, reset_unknown, (PMEVCNTR0_EL0 + n), }
> +
> /* Macro to expand the PMEVTYPERn_EL0 register */
> #define PMU_PMEVTYPER_EL0(n) \
> /* PMEVTYPERn_EL0 */ \
> @@ -779,7 +798,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> access_pmu_regs, reset_pmceid, PMCEID1_EL0 },
> /* PMCCNTR_EL0 */
> { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b000),
> - trap_raz_wi },
> + access_pmu_regs, reset_unknown, PMCCNTR_EL0 },
> /* PMXEVTYPER_EL0 */
> { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b001),
> access_pmu_pmxevtyper },
> @@ -800,6 +819,38 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> { Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b011),
> NULL, reset_unknown, TPIDRRO_EL0 },
>
> + /* PMEVCNTRn_EL0 */
> + PMU_PMEVCNTR_EL0(0),
> + PMU_PMEVCNTR_EL0(1),
> + PMU_PMEVCNTR_EL0(2),
> + PMU_PMEVCNTR_EL0(3),
> + PMU_PMEVCNTR_EL0(4),
> + PMU_PMEVCNTR_EL0(5),
> + PMU_PMEVCNTR_EL0(6),
> + PMU_PMEVCNTR_EL0(7),
> + PMU_PMEVCNTR_EL0(8),
> + PMU_PMEVCNTR_EL0(9),
> + PMU_PMEVCNTR_EL0(10),
> + PMU_PMEVCNTR_EL0(11),
> + PMU_PMEVCNTR_EL0(12),
> + PMU_PMEVCNTR_EL0(13),
> + PMU_PMEVCNTR_EL0(14),
> + PMU_PMEVCNTR_EL0(15),
> + PMU_PMEVCNTR_EL0(16),
> + PMU_PMEVCNTR_EL0(17),
> + PMU_PMEVCNTR_EL0(18),
> + PMU_PMEVCNTR_EL0(19),
> + PMU_PMEVCNTR_EL0(20),
> + PMU_PMEVCNTR_EL0(21),
> + PMU_PMEVCNTR_EL0(22),
> + PMU_PMEVCNTR_EL0(23),
> + PMU_PMEVCNTR_EL0(24),
> + PMU_PMEVCNTR_EL0(25),
> + PMU_PMEVCNTR_EL0(26),
> + PMU_PMEVCNTR_EL0(27),
> + PMU_PMEVCNTR_EL0(28),
> + PMU_PMEVCNTR_EL0(29),
> + PMU_PMEVCNTR_EL0(30),
> /* PMEVTYPERn_EL0 */
> PMU_PMEVTYPER_EL0(0),
> PMU_PMEVTYPER_EL0(1),
> @@ -1034,6 +1085,12 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
>
> if (p->is_write) {
> switch (r->reg) {
> + case c14_PMEVCNTR0 ... c9_PMCCNTR: {
> + val = kvm_pmu_get_counter_value(vcpu,
> + r->reg - c14_PMEVCNTR0);
> + vcpu_cp15(vcpu, r->reg) += (s64)p->regval - val;
OK, we do have an interesting problem here. On 32bit, the cycle counter
can be accessed as either a 32bit or a 64bit register (ARMv8 ARM G6.4.2).
Here, you're happily truncating it without paying attention to the size
of the access.
Please have a look at the way we handle c2_TTBR0; that will give you an
idea of how to deal with it.
> + break;
> + }
> case c14_PMEVTYPER0 ... c14_PMCCFILTR: {
> val = r->reg - c14_PMEVTYPER0;
> kvm_pmu_set_counter_event_type(vcpu, p->regval, val);
> @@ -1057,6 +1114,12 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
> }
> } else {
> switch (r->reg) {
> + case c14_PMEVCNTR0 ... c9_PMCCNTR: {
> + val = kvm_pmu_get_counter_value(vcpu,
> + r->reg - c14_PMEVCNTR0);
> + p->regval = val;
> + break;
> + }
Same here.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
* [PATCH v6 11/21] KVM: ARM64: Add access handler for PMXEVCNTR register
2015-12-08 12:47 ` [PATCH v6 11/21] KVM: ARM64: Add access handler for PMXEVCNTR register Shannon Zhao
@ 2015-12-08 16:33 ` Marc Zyngier
0 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2015-12-08 16:33 UTC (permalink / raw)
To: linux-arm-kernel
On 08/12/15 12:47, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
>
> Accessing PMXEVCNTR register is mapped to the PMEVCNTRn or PMCCNTR which
> is selected by PMSELR.
>
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
> arch/arm64/kvm/sys_regs.c | 44 ++++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 42 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index f7a73b5..2304937 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -516,6 +516,46 @@ out:
> return true;
> }
>
> +static bool access_pmu_pmxevcntr(struct kvm_vcpu *vcpu,
> + struct sys_reg_params *p,
> + const struct sys_reg_desc *r)
> +{
> + u64 pmcr, idx, val;
> +
> + if (!vcpu_mode_is_32bit(vcpu)) {
> + pmcr = vcpu_sys_reg(vcpu, PMCR_EL0);
> + idx = vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_COUNTER_MASK;
> +
> + if (!pmu_counter_idx_valid(pmcr, idx))
> + goto out;
> +
> + val = kvm_pmu_get_counter_value(vcpu, idx);
> + if (!p->is_write) {
> + p->regval = val;
> + goto out;
> + }
> +
> + vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + idx) += (s64)p->regval - val;
> + } else {
> + pmcr = vcpu_cp15(vcpu, c9_PMCR);
> + idx = vcpu_cp15(vcpu, c9_PMSELR) & ARMV8_COUNTER_MASK;
> +
> + if (!pmu_counter_idx_valid(pmcr, idx))
> + goto out;
> +
> + val = kvm_pmu_get_counter_value(vcpu, idx);
> + if (!p->is_write) {
> + p->regval = val;
> + goto out;
> + }
> +
> + vcpu_cp15(vcpu, c14_PMEVCNTR0 + idx) += (s64)p->regval - val;
> + }
> +
> +out:
> + return true;
> +}
There is definitely some common code with the handling of PMEVCNTRn
here. Can you please factor it out?
> +
> /* PMU registers accessor. */
> static bool access_pmu_regs(struct kvm_vcpu *vcpu,
> struct sys_reg_params *p,
> @@ -804,7 +844,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> access_pmu_pmxevtyper },
> /* PMXEVCNTR_EL0 */
> { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b010),
> - trap_raz_wi },
> + access_pmu_pmxevcntr },
> /* PMUSERENR_EL0 */
> { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b000),
> trap_raz_wi },
> @@ -1192,7 +1232,7 @@ static const struct sys_reg_desc cp15_regs[] = {
> { Op1( 0), CRn( 9), CRm(13), Op2( 0), access_pmu_cp15_regs,
> NULL, c9_PMCCNTR },
> { Op1( 0), CRn( 9), CRm(13), Op2( 1), access_pmu_pmxevtyper },
> - { Op1( 0), CRn( 9), CRm(13), Op2( 2), trap_raz_wi },
> + { Op1( 0), CRn( 9), CRm(13), Op2( 2), access_pmu_pmxevcntr },
> { Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi },
> { Op1( 0), CRn( 9), CRm(14), Op2( 1), trap_raz_wi },
> { Op1( 0), CRn( 9), CRm(14), Op2( 2), trap_raz_wi },
>
Thanks,
M.
--
Jazz is not dead. It just smells funny...
* [PATCH v6 12/21] KVM: ARM64: Add reset and access handlers for PMCNTENSET and PMCNTENCLR register
2015-12-08 12:47 ` [PATCH v6 12/21] KVM: ARM64: Add reset and access handlers for PMCNTENSET and PMCNTENCLR register Shannon Zhao
@ 2015-12-08 16:42 ` Marc Zyngier
2015-12-09 8:35 ` Shannon Zhao
0 siblings, 1 reply; 48+ messages in thread
From: Marc Zyngier @ 2015-12-08 16:42 UTC (permalink / raw)
To: linux-arm-kernel
On 08/12/15 12:47, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
>
> Since the reset value of PMCNTENSET and PMCNTENCLR is UNKNOWN, use
> reset_unknown for its reset handler. Add a new case to emulate writing
> PMCNTENSET or PMCNTENCLR register.
>
> When writing to PMCNTENSET, call perf_event_enable to enable the perf
> event. When writing to PMCNTENCLR, call perf_event_disable to disable
> the perf event.
>
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
> arch/arm64/kvm/sys_regs.c | 39 +++++++++++++++++++++++++++++++++++----
> include/kvm/arm_pmu.h | 4 ++++
> virt/kvm/arm/pmu.c | 47 +++++++++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 86 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 2304937..a780cb5 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -577,6 +577,21 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
> vcpu_sys_reg(vcpu, r->reg) = p->regval;
> break;
> }
> + case PMCNTENSET_EL0: {
> + val = p->regval;
> + if (r->Op2 == 1) {
> + /* accessing PMCNTENSET_EL0 */
> + kvm_pmu_enable_counter(vcpu, val,
> + vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_E);
> + vcpu_sys_reg(vcpu, r->reg) |= val;
> + } else {
> +
> + /* accessing PMCNTENCLR_EL0 */
> + kvm_pmu_disable_counter(vcpu, val);
> + vcpu_sys_reg(vcpu, r->reg) &= ~val;
> + }
> + break;
> + }
> case PMCR_EL0: {
> /* Only update writeable bits of PMCR */
> val = vcpu_sys_reg(vcpu, r->reg);
> @@ -817,10 +832,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> access_pmu_regs, reset_pmcr, PMCR_EL0, },
> /* PMCNTENSET_EL0 */
> { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b001),
> - trap_raz_wi },
> + access_pmu_regs, reset_unknown, PMCNTENSET_EL0 },
> /* PMCNTENCLR_EL0 */
> { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b010),
> - trap_raz_wi },
> + access_pmu_regs, reset_unknown, PMCNTENSET_EL0 },
> /* PMOVSCLR_EL0 */
> { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b011),
> trap_raz_wi },
> @@ -1137,6 +1152,20 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
> vcpu_cp15(vcpu, r->reg) = p->regval;
> break;
> }
> + case c9_PMCNTENSET: {
> + val = p->regval;
> + if (r->Op2 == 1) {
> + /* accessing c9_PMCNTENSET */
> + kvm_pmu_enable_counter(vcpu, val,
> + vcpu_cp15(vcpu, c9_PMCR) & ARMV8_PMCR_E);
> + vcpu_cp15(vcpu, r->reg) |= val;
> + } else {
> + /* accessing c9_PMCNTENCLR */
> + kvm_pmu_disable_counter(vcpu, val);
> + vcpu_cp15(vcpu, r->reg) &= ~val;
> + }
> + break;
> + }
> case c9_PMCR: {
> /* Only update writeable bits of PMCR */
> val = vcpu_cp15(vcpu, r->reg);
> @@ -1220,8 +1249,10 @@ static const struct sys_reg_desc cp15_regs[] = {
> /* PMU */
> { Op1( 0), CRn( 9), CRm(12), Op2( 0), access_pmu_cp15_regs,
> NULL, c9_PMCR },
> - { Op1( 0), CRn( 9), CRm(12), Op2( 1), trap_raz_wi },
> - { Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi },
> + { Op1( 0), CRn( 9), CRm(12), Op2( 1), access_pmu_cp15_regs,
> + NULL, c9_PMCNTENSET },
> + { Op1( 0), CRn( 9), CRm(12), Op2( 2), access_pmu_cp15_regs,
> + NULL, c9_PMCNTENSET },
> { Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
> { Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmu_cp15_regs,
> NULL, c9_PMSELR },
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index 36bde48..e731656 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -39,6 +39,8 @@ struct kvm_pmu {
>
> #ifdef CONFIG_KVM_ARM_PMU
> u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx);
> +void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val);
> +void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable);
> void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
> u32 select_idx);
> #else
> @@ -46,6 +48,8 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
> {
> return 0;
> }
> +void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val) {}
> +void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable) {}
> void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
> u32 select_idx) {}
> #endif
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index 15babf1..45586d2 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -90,6 +90,53 @@ static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
> }
>
> /**
> + * kvm_pmu_enable_counter - enable selected PMU counter
> + * @vcpu: The vcpu pointer
> + * @val: the value guest writes to PMCNTENSET register
> + * @all_enable: the value of PMCR.E
> + *
> + * Call perf_event_enable to start counting the perf event
> + */
> +void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable)
> +{
> + int i;
> + struct kvm_pmu *pmu = &vcpu->arch.pmu;
> + struct kvm_pmc *pmc;
> +
> + if (!all_enable)
> + return;
You have the vcpu. Can you move the check for PMCR_EL0.E here instead of
having it in both of the callers?
> +
> + for_each_set_bit(i, (const unsigned long *)&val, ARMV8_MAX_COUNTERS) {
Nonononono... If you have to use a long, use a long. Don't cast it
to a different type (hint: big endian).
> + pmc = &pmu->pmc[i];
> + if (pmc->perf_event) {
> + perf_event_enable(pmc->perf_event);
> + if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> + kvm_debug("fail to enable perf event\n");
> + }
> + }
> +}
> +
> +/**
> + * kvm_pmu_disable_counter - disable selected PMU counter
> + * @vcpu: The vcpu pointer
> + * @val: the value guest writes to PMCNTENCLR register
> + *
> + * Call perf_event_disable to stop counting the perf event
> + */
> +void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val)
> +{
> + int i;
> + struct kvm_pmu *pmu = &vcpu->arch.pmu;
> + struct kvm_pmc *pmc;
> +
Why are enable and disable asymmetric (handling of PMCR.E)?
> + for_each_set_bit(i, (const unsigned long *)&val, ARMV8_MAX_COUNTERS) {
> + pmc = &pmu->pmc[i];
> + if (pmc->perf_event)
> + perf_event_disable(pmc->perf_event);
> + }
> +}
> +
> +/**
> * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
> * @vcpu: The vcpu pointer
> * @data: The data guest writes to PMXEVTYPER_EL0
>
Thanks,
M.
--
Jazz is not dead. It just smells funny...
* [PATCH v6 14/21] KVM: ARM64: Add reset and access handlers for PMOVSSET and PMOVSCLR register
2015-12-08 12:47 ` [PATCH v6 14/21] KVM: ARM64: Add reset and access handlers for PMOVSSET and PMOVSCLR register Shannon Zhao
@ 2015-12-08 16:59 ` Marc Zyngier
2015-12-09 8:47 ` Shannon Zhao
0 siblings, 1 reply; 48+ messages in thread
From: Marc Zyngier @ 2015-12-08 16:59 UTC (permalink / raw)
To: linux-arm-kernel
On 08/12/15 12:47, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
>
> Since the reset value of PMOVSSET and PMOVSCLR is UNKNOWN, use
> reset_unknown for its reset handler. Add a new case to emulate writing
> PMOVSSET or PMOVSCLR register.
>
> When writing non-zero value to PMOVSSET, pend PMU interrupt. When the
> value writing to PMOVSCLR is equal to the current value, clear the PMU
> pending interrupt.
>
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
> arch/arm64/kvm/sys_regs.c | 27 ++++++++++++++++--
> include/kvm/arm_pmu.h | 4 +++
> virt/kvm/arm/pmu.c | 72 +++++++++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 100 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index c1dffb2..c830fde 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -601,6 +601,15 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
> vcpu_sys_reg(vcpu, r->reg) &= ~p->regval;
> break;
> }
> + case PMOVSSET_EL0: {
> + if (r->CRm == 14)
> + /* accessing PMOVSSET_EL0 */
> + kvm_pmu_overflow_set(vcpu, p->regval);
> + else
> + /* accessing PMOVSCLR_EL0 */
> + kvm_pmu_overflow_clear(vcpu, p->regval);
> + break;
> + }
> case PMCR_EL0: {
> /* Only update writeable bits of PMCR */
> val = vcpu_sys_reg(vcpu, r->reg);
> @@ -847,7 +856,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> access_pmu_regs, reset_unknown, PMCNTENSET_EL0 },
> /* PMOVSCLR_EL0 */
> { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b011),
> - trap_raz_wi },
> + access_pmu_regs, reset_unknown, PMOVSSET_EL0 },
> /* PMSWINC_EL0 */
> { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b100),
> trap_raz_wi },
> @@ -874,7 +883,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> trap_raz_wi },
> /* PMOVSSET_EL0 */
> { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b011),
> - trap_raz_wi },
> + access_pmu_regs, reset_unknown, PMOVSSET_EL0 },
>
> /* TPIDR_EL0 */
> { Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b010),
> @@ -1184,6 +1193,15 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
> vcpu_cp15(vcpu, r->reg) &= ~p->regval;
> break;
> }
> + case c9_PMOVSSET: {
> + if (r->CRm == 14)
> + /* accessing c9_PMOVSSET */
> + kvm_pmu_overflow_set(vcpu, p->regval);
> + else
> + /* accessing c9_PMOVSCLR */
> + kvm_pmu_overflow_clear(vcpu, p->regval);
> + break;
> + }
> case c9_PMCR: {
> /* Only update writeable bits of PMCR */
> val = vcpu_cp15(vcpu, r->reg);
> @@ -1271,7 +1289,8 @@ static const struct sys_reg_desc cp15_regs[] = {
> NULL, c9_PMCNTENSET },
> { Op1( 0), CRn( 9), CRm(12), Op2( 2), access_pmu_cp15_regs,
> NULL, c9_PMCNTENSET },
> - { Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
> + { Op1( 0), CRn( 9), CRm(12), Op2( 3), access_pmu_cp15_regs,
> + NULL, c9_PMOVSSET },
> { Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmu_cp15_regs,
> NULL, c9_PMSELR },
> { Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmu_cp15_regs,
> @@ -1287,6 +1306,8 @@ static const struct sys_reg_desc cp15_regs[] = {
> NULL, c9_PMINTENSET },
> { Op1( 0), CRn( 9), CRm(14), Op2( 2), access_pmu_cp15_regs,
> NULL, c9_PMINTENSET },
> + { Op1( 0), CRn( 9), CRm(14), Op2( 3), access_pmu_cp15_regs,
> + NULL, c9_PMOVSSET },
>
> { Op1( 0), CRn(10), CRm( 2), Op2( 0), access_vm_reg, NULL, c10_PRRR },
> { Op1( 0), CRn(10), CRm( 2), Op2( 1), access_vm_reg, NULL, c10_NMRR },
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index e731656..a76df52 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -41,6 +41,8 @@ struct kvm_pmu {
> u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx);
> void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val);
> void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable);
> +void kvm_pmu_overflow_clear(struct kvm_vcpu *vcpu, u32 val);
> +void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u32 val);
> void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
> u32 select_idx);
> #else
> @@ -50,6 +52,8 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
> }
> void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val) {}
> void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable) {}
> +void kvm_pmu_overflow_clear(struct kvm_vcpu *vcpu, u32 val) {}
> +void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u32 val) {}
> void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
> u32 select_idx) {}
> #endif
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index 45586d2..ba7d11c 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -136,6 +136,78 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val)
> }
> }
>
> +static u32 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
> +{
> + u32 val;
> +
> + if (!vcpu_mode_is_32bit(vcpu))
> + val = (vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMCR_N_SHIFT)
> + & ARMV8_PMCR_N_MASK;
> + else
> + val = (vcpu_cp15(vcpu, c9_PMCR) >> ARMV8_PMCR_N_SHIFT)
> + & ARMV8_PMCR_N_MASK;
Indentation? Again, there's no need to distinguish between 32bit and 64bit here.
> +
> + return GENMASK(val - 1, 0) | BIT(ARMV8_COUNTER_MASK);
> +}
> +
> +/**
> + * kvm_pmu_overflow_clear - clear PMU overflow interrupt
> + * @vcpu: The vcpu pointer
> + * @val: the value guest writes to PMOVSCLR register
> + * @reg: the current value of PMOVSCLR register
> + */
> +void kvm_pmu_overflow_clear(struct kvm_vcpu *vcpu, u32 val)
> +{
> + u32 mask = kvm_pmu_valid_counter_mask(vcpu);
> +
> + if (!vcpu_mode_is_32bit(vcpu)) {
> + vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= mask;
> + vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= ~val;
> + val = vcpu_sys_reg(vcpu, PMOVSSET_EL0);
I'm not fond of messing with the sysreg like this. Consider using a
temporary variable.
> + } else {
> + vcpu_cp15(vcpu, c9_PMOVSSET) &= mask;
> + vcpu_cp15(vcpu, c9_PMOVSSET) &= ~val;
> + val = vcpu_cp15(vcpu, c9_PMOVSSET);
Same here.
> + }
> +
> + /* If all overflow bits are cleared, kick the vcpu to clear interrupt
> + * pending status.
> + */
> + if (val == 0)
> + kvm_vcpu_kick(vcpu);
Do we really need to do so? This will be dropped on the next entry
anyway, so I don't see the need to kick the vcpu again. Or am I missing
something?
> +}
> +
> +/**
> + * kvm_pmu_overflow_set - set PMU overflow interrupt
> + * @vcpu: The vcpu pointer
> + * @val: the value guest writes to PMOVSSET register
> + */
> +void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u32 val)
> +{
> + u32 mask = kvm_pmu_valid_counter_mask(vcpu);
> +
> + val &= mask;
> + if (val == 0)
> + return;
> +
> + if (!vcpu_mode_is_32bit(vcpu)) {
> + vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= mask;
> + vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= val;
> + val = vcpu_sys_reg(vcpu, PMCNTENSET_EL0)
> + & vcpu_sys_reg(vcpu, PMINTENSET_EL1)
> + & vcpu_sys_reg(vcpu, PMOVSSET_EL0);
Same here.
> + } else {
> + vcpu_cp15(vcpu, c9_PMOVSSET) &= mask;
> + vcpu_cp15(vcpu, c9_PMOVSSET) |= val;
> + val = vcpu_cp15(vcpu, c9_PMCNTENSET)
> + & vcpu_cp15(vcpu, c9_PMINTENSET)
> + & vcpu_cp15(vcpu, c9_PMOVSSET);
and here.
> + }
> +
> + if (val != 0)
> + kvm_vcpu_kick(vcpu);
> +}
> +
> /**
> * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
> * @vcpu: The vcpu pointer
>
Thanks,
M.
--
Jazz is not dead. It just smells funny...
* [PATCH v6 15/21] KVM: ARM64: Add reset and access handlers for PMUSERENR register
2015-12-08 12:47 ` [PATCH v6 15/21] KVM: ARM64: Add reset and access handlers for PMUSERENR register Shannon Zhao
@ 2015-12-08 17:03 ` Marc Zyngier
2015-12-09 9:18 ` Shannon Zhao
0 siblings, 1 reply; 48+ messages in thread
From: Marc Zyngier @ 2015-12-08 17:03 UTC (permalink / raw)
To: linux-arm-kernel
On 08/12/15 12:47, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
>
> The reset value of PMUSERENR_EL0 is UNKNOWN, use reset_unknown.
>
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
> arch/arm64/kvm/sys_regs.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index c830fde..80b66c0 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -880,7 +880,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> access_pmu_pmxevcntr },
> /* PMUSERENR_EL0 */
> { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b000),
> - trap_raz_wi },
> + access_pmu_regs, reset_unknown, PMUSERENR_EL0 },
So while the 64bit view of the register resets as UNKNOWN, a CPU
resetting in 32bit mode resets it to 0. I suggest you reset it to zero
and document that choice. You may have to revisit all the other
registers that reset as UNKNOWN for 64bit as well.
> /* PMOVSSET_EL0 */
> { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b011),
> access_pmu_regs, reset_unknown, PMOVSSET_EL0 },
> @@ -1301,7 +1301,8 @@ static const struct sys_reg_desc cp15_regs[] = {
> NULL, c9_PMCCNTR },
> { Op1( 0), CRn( 9), CRm(13), Op2( 1), access_pmu_pmxevtyper },
> { Op1( 0), CRn( 9), CRm(13), Op2( 2), access_pmu_pmxevcntr },
> - { Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi },
> + { Op1( 0), CRn( 9), CRm(14), Op2( 0), access_pmu_cp15_regs,
> + NULL, c9_PMUSERENR, 0 },
> { Op1( 0), CRn( 9), CRm(14), Op2( 1), access_pmu_cp15_regs,
> NULL, c9_PMINTENSET },
> { Op1( 0), CRn( 9), CRm(14), Op2( 2), access_pmu_cp15_regs,
>
Thanks,
M.
--
Jazz is not dead. It just smells funny...
* [PATCH v6 17/21] KVM: ARM64: Add helper to handle PMCR register bits
2015-12-08 12:47 ` [PATCH v6 17/21] KVM: ARM64: Add helper to handle PMCR register bits Shannon Zhao
@ 2015-12-08 17:36 ` Marc Zyngier
0 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2015-12-08 17:36 UTC (permalink / raw)
To: linux-arm-kernel
On 08/12/15 12:47, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
>
> According to ARMv8 spec, when writing 1 to PMCR.E, all counters are
> enabled by PMCNTENSET, while writing 0 to PMCR.E, all counters are
> disabled. When writing 1 to PMCR.P, reset all event counters, not
> including PMCCNTR, to zero. When writing 1 to PMCR.C, reset PMCCNTR to
> zero.
>
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
> arch/arm64/kvm/sys_regs.c | 2 ++
> include/kvm/arm_pmu.h | 2 ++
> virt/kvm/arm/pmu.c | 51 +++++++++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 55 insertions(+)
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 9baa654..110b288 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -620,6 +620,7 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
> val &= ~ARMV8_PMCR_MASK;
> val |= p->regval & ARMV8_PMCR_MASK;
> vcpu_sys_reg(vcpu, r->reg) = val;
> + kvm_pmu_handle_pmcr(vcpu, val);
> break;
> }
> case PMCEID0_EL0:
> @@ -1218,6 +1219,7 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
> val &= ~ARMV8_PMCR_MASK;
> val |= p->regval & ARMV8_PMCR_MASK;
> vcpu_cp15(vcpu, r->reg) = val;
> + kvm_pmu_handle_pmcr(vcpu, val);
> break;
> }
> case c9_PMCEID0:
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index d12450a..a131f76 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -46,6 +46,7 @@ void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u32 val);
> void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u32 val);
> void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
> u32 select_idx);
> +void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u32 val);
> #else
> u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
> {
> @@ -58,6 +59,7 @@ void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u32 val) {}
> void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u32 val) {}
> void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
> u32 select_idx) {}
> +void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u32 val) {}
> #endif
>
> #endif
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index 093e211..9b9c706 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -151,6 +151,57 @@ static u32 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
> }
>
> /**
> + * kvm_pmu_handle_pmcr - handle PMCR register
> + * @vcpu: The vcpu pointer
> + * @val: the value guest writes to PMCR register
> + */
> +void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u32 val)
> +{
> + struct kvm_pmu *pmu = &vcpu->arch.pmu;
> + struct kvm_pmc *pmc;
> + u32 enable;
> + int i;
> +
> + if (val & ARMV8_PMCR_E) {
> + if (!vcpu_mode_is_32bit(vcpu))
> + enable = vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
This returns a 64bit quantity. Please explicitly select the low 32bits.
> + else
> + enable = vcpu_cp15(vcpu, c9_PMCNTENSET);
> +
> + kvm_pmu_enable_counter(vcpu, enable, true);
It really feels like there is some common logic here with the handling
of PMCNTENSET that could be shared.
> + } else {
> + kvm_pmu_disable_counter(vcpu, 0xffffffffUL);
> + }
> +
> + if (val & ARMV8_PMCR_C) {
> + pmc = &pmu->pmc[ARMV8_MAX_COUNTERS - 1];
Nit: it would be nice to have a #define for the cycle counter.
> + if (pmc->perf_event)
> + local64_set(&pmc->perf_event->count, 0);
> + if (!vcpu_mode_is_32bit(vcpu))
> + vcpu_sys_reg(vcpu, PMCCNTR_EL0) = 0;
> + else
> + vcpu_cp15(vcpu, c9_PMCCNTR) = 0;
> + }
> +
> + if (val & ARMV8_PMCR_P) {
> + for (i = 0; i < ARMV8_MAX_COUNTERS - 1; i++) {
> + pmc = &pmu->pmc[i];
> + if (pmc->perf_event)
> + local64_set(&pmc->perf_event->count, 0);
> + if (!vcpu_mode_is_32bit(vcpu))
> + vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = 0;
> + else
> + vcpu_cp15(vcpu, c14_PMEVCNTR0 + i) = 0;
> + }
> + }
> +
> + if (val & ARMV8_PMCR_LC) {
> + pmc = &pmu->pmc[ARMV8_MAX_COUNTERS - 1];
> + pmc->bitmask = 0xffffffffffffffffUL;
> + }
> +}
> +
> +/**
> * kvm_pmu_overflow_clear - clear PMU overflow interrupt
> * @vcpu: The vcpu pointer
> * @val: the value guest writes to PMOVSCLR register
>
Thanks,
M.
--
Jazz is not dead. It just smells funny...
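The register side effects of kvm_pmu_handle_pmcr() in the patch above can be modelled in a few lines of plain C. This is a toy sketch with made-up types (toy_pmu, NR_COUNTERS are not the KVM names), and the perf-event enable/disable path driven by PMCR.E is left out:

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the PMCR write handling discussed above; all names are
 * hypothetical, not the KVM code. */
#define PMCR_P  (1U << 1)   /* reset all event counters    */
#define PMCR_C  (1U << 2)   /* reset the cycle counter     */
#define PMCR_LC (1U << 6)   /* 64bit cycle counter enable  */
#define NR_COUNTERS 4       /* toy value */

struct toy_pmu {
    uint64_t evcntr[NR_COUNTERS]; /* event counters            */
    uint64_t ccntr;               /* cycle counter (PMCCNTR)   */
    uint64_t cc_mask;             /* cycle counter width mask  */
};

static void toy_handle_pmcr(struct toy_pmu *pmu, uint32_t val)
{
    int i;

    if (val & PMCR_C)           /* PMCR.C resets PMCCNTR only */
        pmu->ccntr = 0;
    if (val & PMCR_P)           /* PMCR.P resets event counters only */
        for (i = 0; i < NR_COUNTERS; i++)
            pmu->evcntr[i] = 0;
    if (val & PMCR_LC)          /* PMCR.LC widens the cycle counter */
        pmu->cc_mask = ~0ULL;
}
```

With the cycle counter living at index ARMV8_MAX_COUNTERS - 1 in the real code, Marc's suggested #define for it would also make those accesses self-documenting.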
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v6 18/21] KVM: ARM64: Add PMU overflow interrupt routing
2015-12-08 12:47 ` [PATCH v6 18/21] KVM: ARM64: Add PMU overflow interrupt routing Shannon Zhao
@ 2015-12-08 17:37 ` Marc Zyngier
0 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2015-12-08 17:37 UTC (permalink / raw)
To: linux-arm-kernel
On 08/12/15 12:47, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
>
> When calling perf_event_create_kernel_counter to create perf_event,
> assign a overflow handler. Then when perf event overflows, call
> kvm_vcpu_kick() to sync the interrupt.
Please update the commit message, things have changed quite a bit now.
>
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
> arch/arm/kvm/arm.c | 2 ++
> include/kvm/arm_pmu.h | 2 ++
> virt/kvm/arm/pmu.c | 52 ++++++++++++++++++++++++++++++++++++++++++++++++++-
> 3 files changed, 55 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> index e06fd29..cd696ef 100644
> --- a/arch/arm/kvm/arm.c
> +++ b/arch/arm/kvm/arm.c
> @@ -28,6 +28,7 @@
> #include <linux/sched.h>
> #include <linux/kvm.h>
> #include <trace/events/kvm.h>
> +#include <kvm/arm_pmu.h>
>
> #define CREATE_TRACE_POINTS
> #include "trace.h"
> @@ -569,6 +570,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
> * non-preemptible context.
> */
> preempt_disable();
> + kvm_pmu_flush_hwstate(vcpu);
> kvm_timer_flush_hwstate(vcpu);
> kvm_vgic_flush_hwstate(vcpu);
>
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index a131f76..c4041008 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -38,6 +38,7 @@ struct kvm_pmu {
> };
>
> #ifdef CONFIG_KVM_ARM_PMU
> +void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
> u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx);
> void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val);
> void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable);
> @@ -48,6 +49,7 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
> u32 select_idx);
> void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u32 val);
> #else
> +void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {}
> u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
> {
> return 0;
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index 9b9c706..ff182d6 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -21,6 +21,7 @@
> #include <linux/perf_event.h>
> #include <asm/kvm_emulate.h>
> #include <kvm/arm_pmu.h>
> +#include <kvm/arm_vgic.h>
>
> /**
> * kvm_pmu_get_counter_value - get PMU counter value
> @@ -90,6 +91,54 @@ static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
> }
>
> /**
> + * kvm_pmu_flush_hwstate - flush pmu state to cpu
> + * @vcpu: The vcpu pointer
> + *
> + * Inject virtual PMU IRQ if IRQ is pending for this cpu.
> + */
> +void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu)
> +{
> + struct kvm_pmu *pmu = &vcpu->arch.pmu;
> + u32 overflow;
> +
> + if (pmu->irq_num == -1)
> + return;
> +
> + if (!vcpu_mode_is_32bit(vcpu)) {
> + if (!(vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_E))
> + return;
> +
> + overflow = vcpu_sys_reg(vcpu, PMCNTENSET_EL0)
> + & vcpu_sys_reg(vcpu, PMINTENSET_EL1)
> + & vcpu_sys_reg(vcpu, PMOVSSET_EL0);
> + } else {
> + if (!(vcpu_cp15(vcpu, c9_PMCR) & ARMV8_PMCR_E))
> + return;
> +
> + overflow = vcpu_cp15(vcpu, c9_PMCNTENSET)
> + & vcpu_cp15(vcpu, c9_PMINTENSET)
> + & vcpu_cp15(vcpu, c9_PMOVSSET);
> + }
> +
> + kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, pmu->irq_num,
> + overflow ? 1 : 0);
> +}
> +
> +/**
> + * When perf event overflows, call kvm_pmu_overflow_set to set overflow status.
> + */
> +static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
> + struct perf_sample_data *data,
> + struct pt_regs *regs)
> +{
> + struct kvm_pmc *pmc = perf_event->overflow_handler_context;
> + struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
> + int idx = pmc->idx;
> +
> + kvm_pmu_overflow_set(vcpu, BIT(idx));
> +}
> +
> +/**
> * kvm_pmu_enable_counter - enable selected PMU counter
> * @vcpu: The vcpu pointer
> * @val: the value guest writes to PMCNTENSET register
> @@ -341,7 +390,8 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
> /* The initial sample period (overflow count) of an event. */
> attr.sample_period = (-counter) & pmc->bitmask;
>
> - event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc);
> + event = perf_event_create_kernel_counter(&attr, -1, current,
> + kvm_pmu_perf_overflow, pmc);
> if (IS_ERR(event)) {
> printk_once("kvm: pmu event creation failed %ld\n",
> PTR_ERR(event));
>
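The pending-overflow computation in kvm_pmu_flush_hwstate above boils down to a three-way AND gated by PMCR.E: an interrupt is pending only for counters that are enabled, have their interrupt enabled, and have actually overflowed. A standalone sketch (simplified types, not the KVM code):

```c
#include <assert.h>
#include <stdint.h>

#define PMCR_E (1U << 0)  /* global counter enable, bit 0 of PMCR */

/* Returns non-zero when the PMU overflow interrupt should be pending:
 * PMCR.E set, and at least one counter with its PMCNTENSET, PMINTENSET
 * and PMOVSSET bits all set. */
static int pmu_irq_pending(uint32_t pmcr, uint32_t cntenset,
                           uint32_t intenset, uint32_t ovsset)
{
    if (!(pmcr & PMCR_E))
        return 0;
    return (cntenset & intenset & ovsset) != 0;
}
```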
Thanks,
M.
--
Jazz is not dead. It just smells funny...
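Separately, the initial sample period in the last hunk, (-counter) & pmc->bitmask, is just the number of increments left before the masked counter wraps, i.e. when perf should fire the overflow callback. A quick standalone check of that arithmetic:

```c
#include <assert.h>
#include <stdint.h>

/* Number of events until a counter of the given width (bitmask)
 * overflows, starting from `counter`. Unsigned negation is well
 * defined: -counter is 2^64 - counter. */
static uint64_t sample_period(uint64_t counter, uint64_t bitmask)
{
    return (-counter) & bitmask;
}
```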
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v6 21/21] KVM: ARM64: Add a new kvm ARM PMU device
2015-12-08 12:47 ` [PATCH v6 21/21] KVM: ARM64: Add a new kvm ARM PMU device Shannon Zhao
@ 2015-12-08 17:43 ` Marc Zyngier
0 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2015-12-08 17:43 UTC (permalink / raw)
To: linux-arm-kernel
On 08/12/15 12:47, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
>
> Add a new kvm device type KVM_DEV_TYPE_ARM_PMU_V3 for ARM PMU. Implement
> the kvm_device_ops for it.
>
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
> Documentation/virtual/kvm/devices/arm-pmu.txt | 16 +++++
> arch/arm64/include/uapi/asm/kvm.h | 3 +
> include/linux/kvm_host.h | 1 +
> include/uapi/linux/kvm.h | 2 +
> virt/kvm/arm/pmu.c | 93 +++++++++++++++++++++++++++
> virt/kvm/kvm_main.c | 4 ++
> 6 files changed, 119 insertions(+)
> create mode 100644 Documentation/virtual/kvm/devices/arm-pmu.txt
>
> diff --git a/Documentation/virtual/kvm/devices/arm-pmu.txt b/Documentation/virtual/kvm/devices/arm-pmu.txt
> new file mode 100644
> index 0000000..5121f1f
> --- /dev/null
> +++ b/Documentation/virtual/kvm/devices/arm-pmu.txt
> @@ -0,0 +1,16 @@
> +ARM Virtual Performance Monitor Unit (vPMU)
> +===========================================
> +
> +Device types supported:
> + KVM_DEV_TYPE_ARM_PMU_V3 ARM Performance Monitor Unit v3
> +
> +Instantiate one PMU instance per VCPU through this API.
> +
> +Groups:
> + KVM_DEV_ARM_PMU_GRP_IRQ
> + Attributes:
> + A value describing the interrupt number of PMU overflow interrupt. This
> + interrupt should be a PPI.
> +
> + Errors:
> + -EINVAL: Value set is out of the expected range (from 16 to 31)
> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
> index 2d4ca4b..568afa2 100644
> --- a/arch/arm64/include/uapi/asm/kvm.h
> +++ b/arch/arm64/include/uapi/asm/kvm.h
> @@ -204,6 +204,9 @@ struct kvm_arch_memory_slot {
> #define KVM_DEV_ARM_VGIC_GRP_CTRL 4
> #define KVM_DEV_ARM_VGIC_CTRL_INIT 0
>
> +/* Device Control API: ARM PMU */
> +#define KVM_DEV_ARM_PMU_GRP_IRQ 0
> +
> /* KVM_IRQ_LINE irq field index values */
> #define KVM_ARM_IRQ_TYPE_SHIFT 24
> #define KVM_ARM_IRQ_TYPE_MASK 0xff
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index c923350..608dea6 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1161,6 +1161,7 @@ extern struct kvm_device_ops kvm_mpic_ops;
> extern struct kvm_device_ops kvm_xics_ops;
> extern struct kvm_device_ops kvm_arm_vgic_v2_ops;
> extern struct kvm_device_ops kvm_arm_vgic_v3_ops;
> +extern struct kvm_device_ops kvm_arm_pmu_ops;
>
> #ifdef CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT
>
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 03f3618..4ba6fdd 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1032,6 +1032,8 @@ enum kvm_device_type {
> #define KVM_DEV_TYPE_FLIC KVM_DEV_TYPE_FLIC
> KVM_DEV_TYPE_ARM_VGIC_V3,
> #define KVM_DEV_TYPE_ARM_VGIC_V3 KVM_DEV_TYPE_ARM_VGIC_V3
> + KVM_DEV_TYPE_ARM_PMU_V3,
> +#define KVM_DEV_TYPE_ARM_PMU_V3 KVM_DEV_TYPE_ARM_PMU_V3
> KVM_DEV_TYPE_MAX,
> };
>
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index f8007c7..a84a4d7 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -19,10 +19,13 @@
> #include <linux/kvm.h>
> #include <linux/kvm_host.h>
> #include <linux/perf_event.h>
> +#include <linux/uaccess.h>
> #include <asm/kvm_emulate.h>
> #include <kvm/arm_pmu.h>
> #include <kvm/arm_vgic.h>
>
> +#include "vgic.h"
> +
> /**
> * kvm_pmu_get_counter_value - get PMU counter value
> * @vcpu: The vcpu pointer
> @@ -438,3 +441,93 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
>
> pmc->perf_event = event;
> }
> +
> +static inline bool kvm_arm_pmu_initialized(struct kvm_vcpu *vcpu)
> +{
> + return vcpu->arch.pmu.irq_num != -1;
> +}
> +
> +static int kvm_arm_pmu_set_irq(struct kvm *kvm, int irq)
> +{
> + int j;
> + struct kvm_vcpu *vcpu;
> +
> + kvm_for_each_vcpu(j, vcpu, kvm) {
> + struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +
> + if (!kvm_arm_pmu_initialized(vcpu))
> + kvm_debug("Set kvm ARM PMU irq: %d\n", irq);
> + pmu->irq_num = irq;
Missing braces? Also, you should consider returning an error if it has
been initialized twice.
> + }
> +
> + return 0;
> +}
> +
> +static int kvm_arm_pmu_create(struct kvm_device *dev, u32 type)
> +{
> + int i;
> + struct kvm_vcpu *vcpu;
> + struct kvm *kvm = dev->kvm;
> +
> + kvm_for_each_vcpu(i, vcpu, kvm) {
> + struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +
> + memset(pmu, 0, sizeof(*pmu));
> + kvm_pmu_vcpu_reset(vcpu);
> + pmu->irq_num = -1;
> + }
> +
> + return 0;
> +}
> +
> +static void kvm_arm_pmu_destroy(struct kvm_device *dev)
> +{
> + kfree(dev);
> +}
> +
> +static int kvm_arm_pmu_set_attr(struct kvm_device *dev,
> + struct kvm_device_attr *attr)
> +{
> + switch (attr->group) {
> + case KVM_DEV_ARM_PMU_GRP_IRQ: {
> + int __user *uaddr = (int __user *)(long)attr->addr;
> + int reg;
> +
> + if (get_user(reg, uaddr))
> + return -EFAULT;
> +
> + if (reg < VGIC_NR_SGIS || reg >= VGIC_NR_PRIVATE_IRQS)
> + return -EINVAL;
> +
> + return kvm_arm_pmu_set_irq(dev->kvm, reg);
> + }
> + }
> +
> + return -ENXIO;
> +}
> +
> +static int kvm_arm_pmu_get_attr(struct kvm_device *dev,
> + struct kvm_device_attr *attr)
> +{
Shouldn't you be able to retrieve the configured interrupt?
> + return 0;
> +}
> +
> +static int kvm_arm_pmu_has_attr(struct kvm_device *dev,
> + struct kvm_device_attr *attr)
> +{
> + switch (attr->group) {
> + case KVM_DEV_ARM_PMU_GRP_IRQ:
> + return 0;
> + }
> +
> + return -ENXIO;
> +}
> +
> +struct kvm_device_ops kvm_arm_pmu_ops = {
> + .name = "kvm-arm-pmu",
> + .create = kvm_arm_pmu_create,
> + .destroy = kvm_arm_pmu_destroy,
> + .set_attr = kvm_arm_pmu_set_attr,
> + .get_attr = kvm_arm_pmu_get_attr,
> + .has_attr = kvm_arm_pmu_has_attr,
> +};
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 484079e..81a42cc 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -2647,6 +2647,10 @@ static struct kvm_device_ops *kvm_device_ops_table[KVM_DEV_TYPE_MAX] = {
> #ifdef CONFIG_KVM_XICS
> [KVM_DEV_TYPE_XICS] = &kvm_xics_ops,
> #endif
> +
> +#ifdef CONFIG_KVM_ARM_PMU
> + [KVM_DEV_TYPE_ARM_PMU_V3] = &kvm_arm_pmu_ops,
> +#endif
> };
>
> int kvm_register_device_ops(struct kvm_device_ops *ops, u32 type)
>
Thanks,
M.
--
Jazz is not dead. It just smells funny...
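The range check in kvm_arm_pmu_set_attr above restricts the overflow interrupt to a PPI: in GIC numbering, SGIs are IRQs 0-15 and PPIs are IRQs 16-31. A standalone sketch of the same predicate (constants spelled out here; the real code uses VGIC_NR_SGIS and VGIC_NR_PRIVATE_IRQS):

```c
#include <assert.h>

#define NR_SGIS         16  /* SGIs occupy IRQs 0-15  */
#define NR_PRIVATE_IRQS 32  /* SGIs + PPIs: IRQs 0-31 */

/* A valid PMU overflow interrupt must be a PPI, i.e. in [16, 32). */
static int pmu_irq_is_valid(int irq)
{
    return irq >= NR_SGIS && irq < NR_PRIVATE_IRQS;
}
```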
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v6 00/21] KVM: ARM64: Add guest PMU support
2015-12-08 12:47 [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Shannon Zhao
` (20 preceding siblings ...)
2015-12-08 12:47 ` [PATCH v6 21/21] KVM: ARM64: Add a new kvm ARM PMU device Shannon Zhao
@ 2015-12-08 17:56 ` Marc Zyngier
21 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2015-12-08 17:56 UTC (permalink / raw)
To: linux-arm-kernel
Shannon,
On 08/12/15 12:47, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
>
> This patchset adds guest PMU support for KVM on ARM64. It takes
> trap-and-emulate approach. When guest wants to monitor one event, it
> will be trapped by KVM and KVM will call perf_event API to create a perf
> event and call relevant perf_event APIs to get the count value of event.
There is still some work to do. A number of bugs, some design issues,
and in general a lot of tidying up to do. If you want this in 4.5,
you're going to have to address this quickly.
But quickly doesn't mean rushing it, and I have the feeling that this is
what you've done over the past week. Also, I've reviewed this series 3
times during that time, and I feel a bit exhausted.
So please do not send another one before the end of the week. Take the
time to address all the comments, ask questions if needed, and hopefully
I can review it again next Monday.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v6 07/21] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function
2015-12-08 15:43 ` Marc Zyngier
@ 2015-12-09 7:38 ` Shannon Zhao
2015-12-09 8:23 ` Marc Zyngier
0 siblings, 1 reply; 48+ messages in thread
From: Shannon Zhao @ 2015-12-09 7:38 UTC (permalink / raw)
To: linux-arm-kernel
On 2015/12/8 23:43, Marc Zyngier wrote:
> On 08/12/15 12:47, Shannon Zhao wrote:
>> From: Shannon Zhao <shannon.zhao@linaro.org>
>> +/**
>> + * kvm_pmu_get_counter_value - get PMU counter value
>> + * @vcpu: The vcpu pointer
>> + * @select_idx: The counter index
>> + */
>> +u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
>> +{
>> + u64 counter, enabled, running;
>> + struct kvm_pmu *pmu = &vcpu->arch.pmu;
>> + struct kvm_pmc *pmc = &pmu->pmc[select_idx];
>> +
>> + if (!vcpu_mode_is_32bit(vcpu))
>> + counter = vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + select_idx);
>> + else
>> + counter = vcpu_cp15(vcpu, c14_PMEVCNTR0 + select_idx);
>> +
>> + if (pmc->perf_event)
>> + counter += perf_event_read_value(pmc->perf_event, &enabled,
>> + &running);
>> +
>> + return counter & pmc->bitmask;
>
> This one confused me for a while. Is it the case that you return
> whatever is in the vcpu view of the counter, plus anything that perf
> itself has counted? If so, I'd appreciate a comment here...
>
Yes, the real counter value is the saved counter value plus the value
the perf event has counted since. I'll add a comment.
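In other words, the guest-visible counter is whatever was saved in the vcpu register file plus whatever the backing perf event has counted since, truncated to the counter's width. As a toy model (not the KVM code):

```c
#include <assert.h>
#include <stdint.h>

/* Guest-visible counter = saved register value + perf event delta,
 * masked to the configured counter width. */
static uint64_t counter_value(uint64_t saved, uint64_t perf_delta,
                              uint64_t bitmask)
{
    return (saved + perf_delta) & bitmask;
}
```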
>> +}
>> +
>> +static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u32 select_idx)
>> +{
>> + if (!vcpu_mode_is_32bit(vcpu))
>> + return (vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_E) &
>> + (vcpu_sys_reg(vcpu, PMCNTENSET_EL0) >> select_idx);
>
> This looks wrong. Shouldn't it be:
>
> return ((vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_E) &&
> (vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & (1 << select_idx)));
>
>> + else
>> + return (vcpu_sys_reg(vcpu, c9_PMCR) & ARMV8_PMCR_E) &
>> + (vcpu_sys_reg(vcpu, c9_PMCNTENSET) >> select_idx);
>> +}
>
> Also, I don't really see why we need to check the 32bit version, which
> has the exact same content.
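Marc's suggested form of the predicate, written out standalone (simplified types; assumes PMCR.E is bit 0 as in ARMv8):

```c
#include <assert.h>
#include <stdint.h>

#define PMCR_E (1U << 0)

/* A counter counts only when the global enable (PMCR.E) is set AND its
 * own bit in PMCNTENSET is set. */
static int counter_is_enabled(uint64_t pmcr, uint64_t cntenset,
                              unsigned int idx)
{
    return (pmcr & PMCR_E) && (cntenset & (1ULL << idx));
}
```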
>
>> +
>> +static inline struct kvm_vcpu *kvm_pmc_to_vcpu(struct kvm_pmc *pmc)
>> +{
>> + struct kvm_pmu *pmu;
>> + struct kvm_vcpu_arch *vcpu_arch;
>> +
>> + pmc -= pmc->idx;
>> + pmu = container_of(pmc, struct kvm_pmu, pmc[0]);
>> + vcpu_arch = container_of(pmu, struct kvm_vcpu_arch, pmu);
>> + return container_of(vcpu_arch, struct kvm_vcpu, arch);
>> +}
>> +
>> +/**
>> + * kvm_pmu_stop_counter - stop PMU counter
>> + * @pmc: The PMU counter pointer
>> + *
>> + * If this counter has been configured to monitor some event, release it here.
>> + */
>> +static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
>> +{
>> + struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
>> + u64 counter;
>> +
>> + if (pmc->perf_event) {
>> + counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);
>> + if (!vcpu_mode_is_32bit(vcpu))
>> + vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + pmc->idx) = counter;
>> + else
>> + vcpu_cp15(vcpu, c14_PMEVCNTR0 + pmc->idx) = counter;
>
> Same thing - we don't need to make a difference between 32 and 64bit.
>
So it's fine to drop all the vcpu_mode_is_32bit(vcpu) checks in this
series? The only one we should take care of is PMCCNTR, right?
>> +
>> + perf_event_release_kernel(pmc->perf_event);
>> + pmc->perf_event = NULL;
>> + }
>> +}
>> +
>> +/**
>> + * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
>> + * @vcpu: The vcpu pointer
>> + * @data: The data guest writes to PMXEVTYPER_EL0
>> + * @select_idx: The number of selected counter
>> + *
>> + * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
>> + * event with given hardware event number. Here we call perf_event API to
>> + * emulate this action and create a kernel perf event for it.
>> + */
>> +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
>> + u32 select_idx)
>> +{
>> + struct kvm_pmu *pmu = &vcpu->arch.pmu;
>> + struct kvm_pmc *pmc = &pmu->pmc[select_idx];
>> + struct perf_event *event;
>> + struct perf_event_attr attr;
>> + u32 eventsel;
>> + u64 counter;
>> +
>> + kvm_pmu_stop_counter(pmc);
>
> Wait. I didn't realize this before, but you have the vcpu right here.
> Why don't you pass it as a parameter to kvm_pmu_stop_counter and avoid
> the kvm_pmc_to_vcpu thing altogether?
>
Yeah, we could pass the vcpu as a parameter to this function. But the
kvm_pmc_to_vcpu helper is also used in kvm_pmu_perf_overflow(), which
only receives the pmc (and needs pmc->idx) and cannot be passed the
vcpu, so the helper is still necessary for kvm_pmu_perf_overflow().
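The kvm_pmc_to_vcpu trick being discussed, illustrated with toy structs: step from an array element back to element 0 using its stored index, then use container_of (plain offsetof arithmetic) to recover the enclosing struct:

```c
#include <assert.h>
#include <stddef.h>

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

/* Toy stand-ins for kvm_pmc / kvm_pmu, not the real structures. */
struct pmc { int idx; };
struct pmu { struct pmc pmc[4]; };

static struct pmu *pmc_to_pmu(struct pmc *pmc)
{
    pmc -= pmc->idx;  /* back to pmc[0]; relies on idx being correct */
    return container_of(pmc, struct pmu, pmc[0]);
}
```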
Thanks,
--
Shannon
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v6 07/21] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function
2015-12-09 7:38 ` Shannon Zhao
@ 2015-12-09 8:23 ` Marc Zyngier
0 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2015-12-09 8:23 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, 9 Dec 2015 15:38:09 +0800
Shannon Zhao <zhaoshenglong@huawei.com> wrote:
>
>
> On 2015/12/8 23:43, Marc Zyngier wrote:
> > On 08/12/15 12:47, Shannon Zhao wrote:
> >> From: Shannon Zhao <shannon.zhao@linaro.org>
> >> +/**
> >> + * kvm_pmu_get_counter_value - get PMU counter value
> >> + * @vcpu: The vcpu pointer
> >> + * @select_idx: The counter index
> >> + */
> >> +u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
> >> +{
> >> + u64 counter, enabled, running;
> >> + struct kvm_pmu *pmu = &vcpu->arch.pmu;
> >> + struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> >> +
> >> + if (!vcpu_mode_is_32bit(vcpu))
> >> + counter = vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + select_idx);
> >> + else
> >> + counter = vcpu_cp15(vcpu, c14_PMEVCNTR0 + select_idx);
> >> +
> >> + if (pmc->perf_event)
> >> + counter += perf_event_read_value(pmc->perf_event, &enabled,
> >> + &running);
> >> +
> >> + return counter & pmc->bitmask;
> >
> > This one confused me for a while. Is it the case that you return
> > whatever is in the vcpu view of the counter, plus anything that perf
> > itself has counted? If so, I'd appreciate a comment here...
> >
> Yes, the real counter value is the current counter value plus the value
> perf event counts. I'll add a comment.
>
> >> +}
> >> +
> >> +static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u32 select_idx)
> >> +{
> >> + if (!vcpu_mode_is_32bit(vcpu))
> >> + return (vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_E) &
> >> + (vcpu_sys_reg(vcpu, PMCNTENSET_EL0) >> select_idx);
> >
> > This looks wrong. Shouldn't it be:
> >
> > return ((vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_E) &&
> > (vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & (1 << select_idx)));
> >
> >> + else
> >> + return (vcpu_sys_reg(vcpu, c9_PMCR) & ARMV8_PMCR_E) &
> >> + (vcpu_sys_reg(vcpu, c9_PMCNTENSET) >> select_idx);
> >> +}
> >
> > Also, I don't really see why we need to check the 32bit version, which
> > has the exact same content.
> >
> >> +
> >> +static inline struct kvm_vcpu *kvm_pmc_to_vcpu(struct kvm_pmc *pmc)
> >> +{
> >> + struct kvm_pmu *pmu;
> >> + struct kvm_vcpu_arch *vcpu_arch;
> >> +
> >> + pmc -= pmc->idx;
> >> + pmu = container_of(pmc, struct kvm_pmu, pmc[0]);
> >> + vcpu_arch = container_of(pmu, struct kvm_vcpu_arch, pmu);
> >> + return container_of(vcpu_arch, struct kvm_vcpu, arch);
> >> +}
> >> +
> >> +/**
> >> + * kvm_pmu_stop_counter - stop PMU counter
> >> + * @pmc: The PMU counter pointer
> >> + *
> >> + * If this counter has been configured to monitor some event, release it here.
> >> + */
> >> +static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
> >> +{
> >> + struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
> >> + u64 counter;
> >> +
> >> + if (pmc->perf_event) {
> >> + counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);
> >> + if (!vcpu_mode_is_32bit(vcpu))
> >> + vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + pmc->idx) = counter;
> >> + else
> >> + vcpu_cp15(vcpu, c14_PMEVCNTR0 + pmc->idx) = counter;
> >
> > Same thing - we don't need to make a difference between 32 and 64bit.
> >
> So it's fine to drop all the vcpu_mode_is_32bit(vcpu) checks in this
> series? The only one we should take care of is PMCCNTR, right?
Yes, mostly. As long as you only reason on the 64bit register set,
you're pretty safe, and that in turn solves all kinds of ugly endianness
issues.
> >> +
> >> + perf_event_release_kernel(pmc->perf_event);
> >> + pmc->perf_event = NULL;
> >> + }
> >> +}
> >> +
> >> +/**
> >> + * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
> >> + * @vcpu: The vcpu pointer
> >> + * @data: The data guest writes to PMXEVTYPER_EL0
> >> + * @select_idx: The number of selected counter
> >> + *
> >> + * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
> >> + * event with given hardware event number. Here we call perf_event API to
> >> + * emulate this action and create a kernel perf event for it.
> >> + */
> >> +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
> >> + u32 select_idx)
> >> +{
> >> + struct kvm_pmu *pmu = &vcpu->arch.pmu;
> >> + struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> >> + struct perf_event *event;
> >> + struct perf_event_attr attr;
> >> + u32 eventsel;
> >> + u64 counter;
> >> +
> >> + kvm_pmu_stop_counter(pmc);
> >
> > Wait. I didn't realize this before, but you have the vcpu right here.
> > Why don't you pass it as a parameter to kvm_pmu_stop_counter and avoid
> > the kvm_pmc_to_vcpu thing altogether?
> >
> Yeah, we could pass the vcpu as a parameter to this function. But the
> kvm_pmc_to_vcpu helper is also used in kvm_pmu_perf_overflow(), which
> only receives the pmc (and needs pmc->idx) and cannot be passed the
> vcpu, so the helper is still necessary for kvm_pmu_perf_overflow().
OK. Then keep the helper with kvm_pmu_perf_overflow, and pass the
vcpu as a parameter to the leaf functions.
Thanks,
M.
--
Jazz is not dead. It just smells funny.
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v6 12/21] KVM: ARM64: Add reset and access handlers for PMCNTENSET and PMCNTENCLR register
2015-12-08 16:42 ` Marc Zyngier
@ 2015-12-09 8:35 ` Shannon Zhao
2015-12-09 8:56 ` Marc Zyngier
0 siblings, 1 reply; 48+ messages in thread
From: Shannon Zhao @ 2015-12-09 8:35 UTC (permalink / raw)
To: linux-arm-kernel
On 2015/12/9 0:42, Marc Zyngier wrote:
>> +void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable)
>> > +{
>> > + int i;
>> > + struct kvm_pmu *pmu = &vcpu->arch.pmu;
>> > + struct kvm_pmc *pmc;
>> > +
>> > + if (!all_enable)
>> > + return;
> You have the vcpu. Can you move the check for PMCR_EL0.E here instead of
> having it in both of the callers?
>
But it still needs to check PMCR_EL0.E in kvm_pmu_handle_pmcr(). When
PMCR_EL0.E == 1, it calls kvm_pmu_enable_counter(), otherwise it calls
kvm_pmu_disable_counter(). Since it has already checked the bit there,
it just passes the result as a parameter.
>> > +
>> > + for_each_set_bit(i, (const unsigned long *)&val, ARMV8_MAX_COUNTERS) {
> Nonononono... If you must have to use a long, use a long. Don't cast it
> to a different type (hint: big endian).
>
>> > + pmc = &pmu->pmc[i];
>> > + if (pmc->perf_event) {
>> > + perf_event_enable(pmc->perf_event);
>> > + if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
>> > + kvm_debug("fail to enable perf event\n");
>> > + }
>> > + }
>> > +}
>> > +
>> > +/**
>> > + * kvm_pmu_disable_counter - disable selected PMU counter
>> > + * @vcpu: The vcpu pointer
>> > + * @val: the value guest writes to PMCNTENCLR register
>> > + *
>> > + * Call perf_event_disable to stop counting the perf event
>> > + */
>> > +void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val)
>> > +{
>> > + int i;
>> > + struct kvm_pmu *pmu = &vcpu->arch.pmu;
>> > + struct kvm_pmc *pmc;
>> > +
> Why are enable and disable asymmetric (handling of PMCR.E)?
>
To enable a counter, it needs both PMCR_EL0.E and the corresponding
bit of PMCNTENSET_EL0 set to 1. But to disable a counter, it only needs
one of them cleared, and when PMCR_EL0.E == 0 it disables all the
counters.
Thanks,
--
Shannon
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v6 14/21] KVM: ARM64: Add reset and access handlers for PMOVSSET and PMOVSCLR register
2015-12-08 16:59 ` Marc Zyngier
@ 2015-12-09 8:47 ` Shannon Zhao
0 siblings, 0 replies; 48+ messages in thread
From: Shannon Zhao @ 2015-12-09 8:47 UTC (permalink / raw)
To: linux-arm-kernel
On 2015/12/9 0:59, Marc Zyngier wrote:
>> > + }
>> > +
>> > + /* If all overflow bits are cleared, kick the vcpu to clear interrupt
>> > + * pending status.
>> > + */
>> > + if (val == 0)
>> > + kvm_vcpu_kick(vcpu);
> Do we really need to do so? This will be dropped on the next entry
> anyway, so i don't see the need to kick the vcpu again. Or am I missing
> something?
>
I thought that, for some reason, the interrupt might not be set low
immediately when all the overflow bits are cleared, so I added a kick to
force it to sync the interrupt. But as you said, I'll remove this.
Thanks,
--
Shannon
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v6 12/21] KVM: ARM64: Add reset and access handlers for PMCNTENSET and PMCNTENCLR register
2015-12-09 8:35 ` Shannon Zhao
@ 2015-12-09 8:56 ` Marc Zyngier
0 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2015-12-09 8:56 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, 9 Dec 2015 16:35:58 +0800
Shannon Zhao <zhaoshenglong@huawei.com> wrote:
>
>
> On 2015/12/9 0:42, Marc Zyngier wrote:
> >> +void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable)
> >> > +{
> >> > + int i;
> >> > + struct kvm_pmu *pmu = &vcpu->arch.pmu;
> >> > + struct kvm_pmc *pmc;
> >> > +
> >> > + if (!all_enable)
> >> > + return;
> > You have the vcpu. Can you move the check for PMCR_EL0.E here instead of
> > having it in both of the callers?
> >
> But it still needs to check PMCR_EL0.E in kvm_pmu_handle_pmcr(). When
> PMCR_EL0.E == 1, it calls kvm_pmu_enable_counter(), otherwise it calls
> kvm_pmu_disable_counter(). So as it checks already, just pass the result
> as a parameter.
I've seen that, but it makes the code look ugly. At any rate, you might
as well not call enable_counter if PMCR.E==0. But splitting the lookup
of the bit and the test like you do is not nice at all. Making it
self-contained looks a lot better, and you don't have to think about
the caller.
> >> > +
> >> > + for_each_set_bit(i, (const unsigned long *)&val, ARMV8_MAX_COUNTERS) {
> > Nonononono... If you must have to use a long, use a long. Don't cast it
> > to a different type (hint: big endian).
> >
> >> > + pmc = &pmu->pmc[i];
> >> > + if (pmc->perf_event) {
> >> > + perf_event_enable(pmc->perf_event);
> >> > + if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> >> > + kvm_debug("fail to enable perf event\n");
> >> > + }
> >> > + }
> >> > +}
> >> > +
> >> > +/**
> >> > + * kvm_pmu_disable_counter - disable selected PMU counter
> >> > + * @vcpu: The vcpu pointer
> >> > + * @val: the value guest writes to PMCNTENCLR register
> >> > + *
> >> > + * Call perf_event_disable to stop counting the perf event
> >> > + */
> >> > +void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val)
> >> > +{
> >> > + int i;
> >> > + struct kvm_pmu *pmu = &vcpu->arch.pmu;
> >> > + struct kvm_pmc *pmc;
> >> > +
> > Why are enable and disable asymmetric (handling of PMCR.E)?
> >
> To enable a counter, it needs both PMCR_EL0.E and the corresponding
> bit of PMCNTENSET_EL0 set to 1. But to disable a counter, it only needs
> one of them cleared, and when PMCR_EL0.E == 0 it disables all the
> counters.
OK.
M.
--
Jazz is not dead. It just smells funny.
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v6 15/21] KVM: ARM64: Add reset and access handlers for PMUSERENR register
2015-12-08 17:03 ` Marc Zyngier
@ 2015-12-09 9:18 ` Shannon Zhao
2015-12-09 9:49 ` Marc Zyngier
0 siblings, 1 reply; 48+ messages in thread
From: Shannon Zhao @ 2015-12-09 9:18 UTC (permalink / raw)
To: linux-arm-kernel
On 2015/12/9 1:03, Marc Zyngier wrote:
> On 08/12/15 12:47, Shannon Zhao wrote:
>> > From: Shannon Zhao <shannon.zhao@linaro.org>
>> >
>> > The reset value of PMUSERENR_EL0 is UNKNOWN, use reset_unknown.
>> >
>> > Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>> > ---
>> > arch/arm64/kvm/sys_regs.c | 5 +++--
>> > 1 file changed, 3 insertions(+), 2 deletions(-)
>> >
>> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>> > index c830fde..80b66c0 100644
>> > --- a/arch/arm64/kvm/sys_regs.c
>> > +++ b/arch/arm64/kvm/sys_regs.c
>> > @@ -880,7 +880,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>> > access_pmu_pmxevcntr },
>> > /* PMUSERENR_EL0 */
>> > { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b000),
>> > - trap_raz_wi },
>> > + access_pmu_regs, reset_unknown, PMUSERENR_EL0 },
> So while the 64bit view of the register resets as unknown, a CPU
> resetting in 32bit mode resets as 0. I suggest you reset it as zero, and
> document that choice. You may have to revisit all the other registers
> that do reset as unknown for 64bit as well.
>
Sure.
BTW, here I didn't handle the bits of PMUSERENR which permit/forbid
access to some PMU registers from EL0. Do we need to add a handler for
that? Is there any way to get the exception level of the access in the
hypervisor?
Thanks,
--
Shannon
* [PATCH v6 15/21] KVM: ARM64: Add reset and access handlers for PMUSERENR register
2015-12-09 9:18 ` Shannon Zhao
@ 2015-12-09 9:49 ` Marc Zyngier
0 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2015-12-09 9:49 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, 9 Dec 2015 17:18:02 +0800
Shannon Zhao <zhaoshenglong@huawei.com> wrote:
>
>
> On 2015/12/9 1:03, Marc Zyngier wrote:
> > On 08/12/15 12:47, Shannon Zhao wrote:
> >> > From: Shannon Zhao <shannon.zhao@linaro.org>
> >> >
> >> > The reset value of PMUSERENR_EL0 is UNKNOWN, use reset_unknown.
> >> >
> >> > Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> >> > ---
> >> > arch/arm64/kvm/sys_regs.c | 5 +++--
> >> > 1 file changed, 3 insertions(+), 2 deletions(-)
> >> >
> >> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> >> > index c830fde..80b66c0 100644
> >> > --- a/arch/arm64/kvm/sys_regs.c
> >> > +++ b/arch/arm64/kvm/sys_regs.c
> >> > @@ -880,7 +880,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> >> > access_pmu_pmxevcntr },
> >> > /* PMUSERENR_EL0 */
> >> > { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b000),
> >> > - trap_raz_wi },
> >> > + access_pmu_regs, reset_unknown, PMUSERENR_EL0 },
> > So while the 64bit view of the register resets as unknown, a CPU
> > resetting in 32bit mode resets as 0. I suggest you reset it as zero, and
> > document that choice. You may have to revisit all the other registers
> > that do reset as unknown for 64bit as well.
> >
> Sure.
>
> BTW, here I didn't handle the bits of PMUSERENR which permit/forbid
> access to some PMU registers from EL0. Do we need to add a handler for
> that? Is there any way to get the exception level of the access in the
> hypervisor?
Ah, good point, I missed that. Yes, we need to be able to handle that.
To find out, you can use vcpu_mode_priv(), which returns true if the
CPU was in a high privilege mode (EL1 for 64bit, anything higher than
USR on 32bit), and false otherwise. So far, the only user is
arch/arm/kvm/perf.c.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* [PATCH v6 10/21] KVM: ARM64: Add access handler for PMEVCNTRn and PMCCNTR register
2015-12-08 16:30 ` Marc Zyngier
@ 2015-12-10 11:36 ` Shannon Zhao
2015-12-10 12:07 ` Marc Zyngier
0 siblings, 1 reply; 48+ messages in thread
From: Shannon Zhao @ 2015-12-10 11:36 UTC (permalink / raw)
To: linux-arm-kernel
Hi Marc,
On 2015/12/9 0:30, Marc Zyngier wrote:
> On 08/12/15 12:47, Shannon Zhao wrote:
>> > From: Shannon Zhao <shannon.zhao@linaro.org>
>> >
>> > Since the reset value of PMEVCNTRn or PMCCNTR is UNKNOWN, use
>> > reset_unknown for its reset handler. Add access handler which emulates
>> > writing and reading PMEVCNTRn or PMCCNTR register. When reading
>> > PMEVCNTRn or PMCCNTR, call perf_event_read_value to get the count value
>> > of the perf event.
>> >
>> > Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>> > ---
>> > arch/arm64/kvm/sys_regs.c | 107 +++++++++++++++++++++++++++++++++++++++++++++-
>> > 1 file changed, 105 insertions(+), 2 deletions(-)
>> >
>> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>> > index c116a1b..f7a73b5 100644
>> > --- a/arch/arm64/kvm/sys_regs.c
>> > +++ b/arch/arm64/kvm/sys_regs.c
>> > @@ -525,6 +525,12 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
>> >
>> > if (p->is_write) {
>> > switch (r->reg) {
>> > + case PMEVCNTR0_EL0 ... PMCCNTR_EL0: {
> Same problem as previously mentioned.
>
>> > + val = kvm_pmu_get_counter_value(vcpu,
>> > + r->reg - PMEVCNTR0_EL0);
>> > + vcpu_sys_reg(vcpu, r->reg) += (s64)p->regval - val;
>> > + break;
>> > + }
What if I use a handler like the one below for these accesses to
PMEVCNTRn and PMCCNTR? It converts the register offsets c14_PMEVCNTRn
and c9_PMCCNTR to PMEVCNTRn_EL0 and PMCCNTR_EL0, uniformly uses
vcpu_sys_reg, and doesn't need to take care of big endian. What do you
think about this?
static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
			      struct sys_reg_params *p,
			      const struct sys_reg_desc *r)
{
	u64 idx, reg, val;

	if (p->is_aarch32)
		reg = r->reg / 2;
	else
		reg = r->reg;

	switch (reg) {
	case PMEVCNTR0_EL0 ... PMEVCNTR30_EL0:
		idx = reg - PMEVCNTR0_EL0;
		break;
	case PMCCNTR_EL0:
		idx = ARMV8_CYCLE_IDX;
		break;
	default:
		idx = vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_COUNTER_MASK;
		if (!pmu_counter_idx_valid(vcpu, idx))
			return true;
		reg = (idx == ARMV8_CYCLE_IDX) ? PMCCNTR_EL0 :
			PMEVCNTR0_EL0 + idx;
		break;
	}

	val = kvm_pmu_get_counter_value(vcpu, idx);
	if (p->is_write)
		vcpu_sys_reg(vcpu, reg) = (s64)p->regval - val;
	else
		p->regval = val;

	return true;
}
Thanks,
--
Shannon
* [PATCH v6 10/21] KVM: ARM64: Add access handler for PMEVCNTRn and PMCCNTR register
2015-12-10 11:36 ` Shannon Zhao
@ 2015-12-10 12:07 ` Marc Zyngier
2015-12-10 13:23 ` Shannon Zhao
0 siblings, 1 reply; 48+ messages in thread
From: Marc Zyngier @ 2015-12-10 12:07 UTC (permalink / raw)
To: linux-arm-kernel
Hi Shannon,
On 10/12/15 11:36, Shannon Zhao wrote:
> Hi Marc,
>
> On 2015/12/9 0:30, Marc Zyngier wrote:
>> On 08/12/15 12:47, Shannon Zhao wrote:
>>>> From: Shannon Zhao <shannon.zhao@linaro.org>
>>>>
>>>> Since the reset value of PMEVCNTRn or PMCCNTR is UNKNOWN, use
>>>> reset_unknown for its reset handler. Add access handler which emulates
>>>> writing and reading PMEVCNTRn or PMCCNTR register. When reading
>>>> PMEVCNTRn or PMCCNTR, call perf_event_read_value to get the count value
>>>> of the perf event.
>>>>
>>>> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>>>> ---
>>>> arch/arm64/kvm/sys_regs.c | 107 +++++++++++++++++++++++++++++++++++++++++++++-
>>>> 1 file changed, 105 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>>>> index c116a1b..f7a73b5 100644
>>>> --- a/arch/arm64/kvm/sys_regs.c
>>>> +++ b/arch/arm64/kvm/sys_regs.c
>>>> @@ -525,6 +525,12 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
>>>>
>>>> if (p->is_write) {
>>>> switch (r->reg) {
>>>> + case PMEVCNTR0_EL0 ... PMCCNTR_EL0: {
>> Same problem as previously mentioned.
>>
>>>> + val = kvm_pmu_get_counter_value(vcpu,
>>>> + r->reg - PMEVCNTR0_EL0);
>>>> + vcpu_sys_reg(vcpu, r->reg) += (s64)p->regval - val;
>>>> + break;
>>>> + }
>
> If I use a handler to handle these accesses to PMEVCNTRn and PMCCNTR
> like below. It converts the register offset c14_PMEVCNTRn and c9_PMCCNTR
> to PMEVCNTRn_EL0 and PMCCNTR_EL0, uniformly uses vcpu_sys_reg and
> doesn't need to take care the big endian. What do you think about this?
>
> static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
> struct sys_reg_params *p,
> const struct sys_reg_desc *r)
> {
> u64 idx, reg, val;
>
> if (p->is_aarch32)
> reg = r->reg / 2;
I'd prefer it if you actually decoded the reg itself. Something like:
> 	if (p->is_aarch32) {
> 		if (r->CRn == 9 && r->CRm == 13) {
> 			reg = (r->Op2 == 0) ? PMCCNTR_EL0 : 0;
> 		} else if (r->CRn == 14 && (r->CRm & 0xc) == 8) {
> 			reg = ((r->CRm & 3) << 3) | (r->Op2 & 7);
> 			reg += PMEVCNTR0_EL0;
> 		} else {
> 			BUG();
> 		}
> 	} else {
> 		....
> 	}
And then you can get rid of the c14_PMEVCNTR* and c9_PMCCNTR macros.
The only slightly ugly thing is this 0 value to represent PMXEVCNTR,
but that's what we already have with your "default" clause below.
> else
> reg = r->reg;
>
> switch (reg) {
> case PMEVCNTR0_EL0 ... PMEVCNTR30_EL0: {
> idx = reg - PMEVCNTR0_EL0;
> break;
> }
> case PMCCNTR_EL0: {
> idx = ARMV8_CYCLE_IDX;
> break;
> }
> default:
> idx = vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_COUNTER_MASK;
> if (!pmu_counter_idx_valid(vcpu, idx))
> return true;
> reg = (idx == ARMV8_CYCLE_IDX) ? PMCCNTR_EL0 :
> PMEVCNTR0_EL0 + idx;
> break;
> }
>
> val = kvm_pmu_get_counter_value(vcpu, idx);
> if (p->is_write)
> vcpu_sys_reg(vcpu, reg) = (s64)p->regval - val;
Maybe I don't have my head screwed in the right way, but as long as
we're only using u64 quantities, why do we need this s64 cast?
> else
> p->regval = val;
>
> return true;
> }
Thanks,
M.
--
Jazz is not dead. It just smells funny...
* [PATCH v6 10/21] KVM: ARM64: Add access handler for PMEVCNTRn and PMCCNTR register
2015-12-10 12:07 ` Marc Zyngier
@ 2015-12-10 13:23 ` Shannon Zhao
0 siblings, 0 replies; 48+ messages in thread
From: Shannon Zhao @ 2015-12-10 13:23 UTC (permalink / raw)
To: linux-arm-kernel
On 2015/12/10 20:07, Marc Zyngier wrote:
> Hi Shannon,
>
> On 10/12/15 11:36, Shannon Zhao wrote:
>> > Hi Marc,
>> >
>> > On 2015/12/9 0:30, Marc Zyngier wrote:
>>> >> On 08/12/15 12:47, Shannon Zhao wrote:
>>>>> >>>> From: Shannon Zhao <shannon.zhao@linaro.org>
>>>>> >>>>
>>>>> >>>> Since the reset value of PMEVCNTRn or PMCCNTR is UNKNOWN, use
>>>>> >>>> reset_unknown for its reset handler. Add access handler which emulates
>>>>> >>>> writing and reading PMEVCNTRn or PMCCNTR register. When reading
>>>>> >>>> PMEVCNTRn or PMCCNTR, call perf_event_read_value to get the count value
>>>>> >>>> of the perf event.
>>>>> >>>>
>>>>> >>>> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>>>>> >>>> ---
>>>>> >>>> arch/arm64/kvm/sys_regs.c | 107 +++++++++++++++++++++++++++++++++++++++++++++-
>>>>> >>>> 1 file changed, 105 insertions(+), 2 deletions(-)
>>>>> >>>>
>>>>> >>>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>>>>> >>>> index c116a1b..f7a73b5 100644
>>>>> >>>> --- a/arch/arm64/kvm/sys_regs.c
>>>>> >>>> +++ b/arch/arm64/kvm/sys_regs.c
>>>>> >>>> @@ -525,6 +525,12 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
>>>>> >>>>
>>>>> >>>> if (p->is_write) {
>>>>> >>>> switch (r->reg) {
>>>>> >>>> + case PMEVCNTR0_EL0 ... PMCCNTR_EL0: {
>>> >> Same problem as previously mentioned.
>>> >>
>>>>> >>>> + val = kvm_pmu_get_counter_value(vcpu,
>>>>> >>>> + r->reg - PMEVCNTR0_EL0);
>>>>> >>>> + vcpu_sys_reg(vcpu, r->reg) += (s64)p->regval - val;
>>>>> >>>> + break;
>>>>> >>>> + }
>> >
>> > If I use a handler to handle these accesses to PMEVCNTRn and PMCCNTR
>> > like below. It converts the register offset c14_PMEVCNTRn and c9_PMCCNTR
>> > to PMEVCNTRn_EL0 and PMCCNTR_EL0, uniformly uses vcpu_sys_reg and
>> > doesn't need to take care the big endian. What do you think about this?
>> >
>> > static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
>> > struct sys_reg_params *p,
>> > const struct sys_reg_desc *r)
>> > {
>> > u64 idx, reg, val;
>> >
>> > if (p->is_aarch32)
>> > reg = r->reg / 2;
> I'd prefer it if you actually decoded the reg itself. Something like:
>
> if (p->is_aarch32) {
> if (r->CRn == 9 && r->CRm == 13)
> reg = (r->Op2 & 1) ? 0 : PMCCNTR_EL0;
> if (r->CRn == 14 && (r->CRm & 0xc) == 8) {
> reg = ((r->CRm & 3) << 2) & (r->Op2 & 7);
> reg += PMEVCNTR0_EL0;
> } else {
> BUG();
> }
> } else {
> ....
> }
>
> And then you can get rid of the c14_PMEVCNTR* and c9_PMCCNTR macros.
> The only slightly ugly thing is this 0 value to represent PMXEVCNTR,
> but that's what we already have with your "default" clause below.
>
Ok, thanks.
>> > else
>> > reg = r->reg;
>> >
>> > switch (reg) {
>> > case PMEVCNTR0_EL0 ... PMEVCNTR30_EL0: {
>> > idx = reg - PMEVCNTR0_EL0;
>> > break;
>> > }
>> > case PMCCNTR_EL0: {
>> > idx = ARMV8_CYCLE_IDX;
>> > break;
>> > }
>> > default:
>> > idx = vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_COUNTER_MASK;
>> > if (!pmu_counter_idx_valid(vcpu, idx))
>> > return true;
>> > reg = (idx == ARMV8_CYCLE_IDX) ? PMCCNTR_EL0 :
>> > PMEVCNTR0_EL0 + idx;
>> > break;
>> > }
>> >
>> > val = kvm_pmu_get_counter_value(vcpu, idx);
>> > if (p->is_write)
>> > vcpu_sys_reg(vcpu, reg) = (s64)p->regval - val;
> Maybe I don't have my head screwed in the right way, but as long as
> we're only using u64 quantities, why do we need this s64 cast?
>
In case p->regval is less than val.
For example, the first time it writes 0x80000001 to
vcpu_sys_reg(vcpu, reg) and starts a perf event which then counts a
value of 10000. Later we want to restart the same perf event by writing
0x80000001 to vcpu_sys_reg(vcpu, reg) again. At this point,
p->regval (0x80000001) is less than val (0x80000001 + 10000).
Thanks,
--
Shannon
Thread overview: 48+ messages
2015-12-08 12:47 [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 01/21] ARM64: Move PMU register related defines to asm/pmu.h Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 02/21] KVM: ARM64: Define PMU data structure for each vcpu Shannon Zhao
2015-12-08 13:37 ` Marc Zyngier
2015-12-08 13:53 ` Will Deacon
2015-12-08 14:10 ` Marc Zyngier
2015-12-08 14:14 ` Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 03/21] KVM: ARM64: Add offset defines for PMU registers Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 04/21] KVM: ARM64: Add reset and access handlers for PMCR_EL0 register Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 05/21] KVM: ARM64: Add reset and access handlers for PMSELR register Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 06/21] KVM: ARM64: Add reset and access handlers for PMCEID0 and PMCEID1 register Shannon Zhao
2015-12-08 14:23 ` Marc Zyngier
2015-12-08 12:47 ` [PATCH v6 07/21] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function Shannon Zhao
2015-12-08 15:43 ` Marc Zyngier
2015-12-09 7:38 ` Shannon Zhao
2015-12-09 8:23 ` Marc Zyngier
2015-12-08 12:47 ` [PATCH v6 08/21] KVM: ARM64: Add access handler for PMEVTYPERn and PMCCFILTR register Shannon Zhao
2015-12-08 16:17 ` Marc Zyngier
2015-12-08 12:47 ` [PATCH v6 09/21] KVM: ARM64: Add access handler for PMXEVTYPER register Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 10/21] KVM: ARM64: Add access handler for PMEVCNTRn and PMCCNTR register Shannon Zhao
2015-12-08 16:30 ` Marc Zyngier
2015-12-10 11:36 ` Shannon Zhao
2015-12-10 12:07 ` Marc Zyngier
2015-12-10 13:23 ` Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 11/21] KVM: ARM64: Add access handler for PMXEVCNTR register Shannon Zhao
2015-12-08 16:33 ` Marc Zyngier
2015-12-08 12:47 ` [PATCH v6 12/21] KVM: ARM64: Add reset and access handlers for PMCNTENSET and PMCNTENCLR register Shannon Zhao
2015-12-08 16:42 ` Marc Zyngier
2015-12-09 8:35 ` Shannon Zhao
2015-12-09 8:56 ` Marc Zyngier
2015-12-08 12:47 ` [PATCH v6 13/21] KVM: ARM64: Add reset and access handlers for PMINTENSET and PMINTENCLR register Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 14/21] KVM: ARM64: Add reset and access handlers for PMOVSSET and PMOVSCLR register Shannon Zhao
2015-12-08 16:59 ` Marc Zyngier
2015-12-09 8:47 ` Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 15/21] KVM: ARM64: Add reset and access handlers for PMUSERENR register Shannon Zhao
2015-12-08 17:03 ` Marc Zyngier
2015-12-09 9:18 ` Shannon Zhao
2015-12-09 9:49 ` Marc Zyngier
2015-12-08 12:47 ` [PATCH v6 16/21] KVM: ARM64: Add reset and access handlers for PMSWINC register Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 17/21] KVM: ARM64: Add helper to handle PMCR register bits Shannon Zhao
2015-12-08 17:36 ` Marc Zyngier
2015-12-08 12:47 ` [PATCH v6 18/21] KVM: ARM64: Add PMU overflow interrupt routing Shannon Zhao
2015-12-08 17:37 ` Marc Zyngier
2015-12-08 12:47 ` [PATCH v6 19/21] KVM: ARM64: Reset PMU state when resetting vcpu Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 20/21] KVM: ARM64: Free perf event of PMU when destroying vcpu Shannon Zhao
2015-12-08 12:47 ` [PATCH v6 21/21] KVM: ARM64: Add a new kvm ARM PMU device Shannon Zhao
2015-12-08 17:43 ` Marc Zyngier
2015-12-08 17:56 ` [PATCH v6 00/21] KVM: ARM64: Add guest PMU support Marc Zyngier