linux-perf-users.vger.kernel.org archive mirror
* [PATCH v10 1/5] x86: perf: Move RDPMC event flag to a common definition
       [not found] <20210914204800.3945732-1-robh@kernel.org>
@ 2021-09-14 20:47 ` Rob Herring
  2021-10-13 17:25   ` Mark Rutland
  2021-09-14 20:47 ` [PATCH v10 3/5] arm64: perf: Add userspace counter access disable switch Rob Herring
  2021-09-14 20:47 ` [PATCH v10 4/5] arm64: perf: Enable PMU counter userspace access for perf event Rob Herring
  2 siblings, 1 reply; 9+ messages in thread
From: Rob Herring @ 2021-09-14 20:47 UTC (permalink / raw)
  To: Will Deacon, Mark Rutland, Peter Zijlstra, Ingo Molnar
  Cc: Catalin Marinas, Arnaldo Carvalho de Melo, Jiri Olsa, Kan Liang,
	Ian Rogers, Alexander Shishkin, honnappa.nagarahalli,
	Zachary.Leaf, Raphael Gault, Jonathan Cameron, Namhyung Kim,
	Itaru Kitayama, Vince Weaver, linux-arm-kernel, linux-kernel,
	Thomas Gleixner, Borislav Petkov, x86, H. Peter Anvin,
	linux-perf-users

In preparation to enable user counter access on arm64 and to move some
of the user access handling to perf core, create a common event flag for
user counter access and convert x86 to use it.

Since the architecture specific flags start at the LSB, start at the
MSB for common flags.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: x86@kernel.org
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: linux-perf-users@vger.kernel.org
Signed-off-by: Rob Herring <robh@kernel.org>
---
 arch/x86/events/core.c       | 10 +++++-----
 arch/x86/events/perf_event.h |  2 +-
 include/linux/perf_event.h   |  2 ++
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 2a57dbed4894..2bd50fc061e1 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2469,7 +2469,7 @@ static int x86_pmu_event_init(struct perf_event *event)
 
 	if (READ_ONCE(x86_pmu.attr_rdpmc) &&
 	    !(event->hw.flags & PERF_X86_EVENT_LARGE_PEBS))
-		event->hw.flags |= PERF_X86_EVENT_RDPMC_ALLOWED;
+		event->hw.flags |= PERF_EVENT_FLAG_USER_READ_CNT;
 
 	return err;
 }
@@ -2503,7 +2503,7 @@ void perf_clear_dirty_counters(void)
 
 static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
 {
-	if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
+	if (!(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT))
 		return;
 
 	/*
@@ -2524,7 +2524,7 @@ static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
 
 static void x86_pmu_event_unmapped(struct perf_event *event, struct mm_struct *mm)
 {
-	if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
+	if (!(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT))
 		return;
 
 	if (atomic_dec_and_test(&mm->context.perf_rdpmc_allowed))
@@ -2535,7 +2535,7 @@ static int x86_pmu_event_idx(struct perf_event *event)
 {
 	struct hw_perf_event *hwc = &event->hw;
 
-	if (!(hwc->flags & PERF_X86_EVENT_RDPMC_ALLOWED))
+	if (!(hwc->flags & PERF_EVENT_FLAG_USER_READ_CNT))
 		return 0;
 
 	if (is_metric_idx(hwc->idx))
@@ -2718,7 +2718,7 @@ void arch_perf_update_userpage(struct perf_event *event,
 	userpg->cap_user_time = 0;
 	userpg->cap_user_time_zero = 0;
 	userpg->cap_user_rdpmc =
-		!!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED);
+		!!(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT);
 	userpg->pmc_width = x86_pmu.cntval_bits;
 
 	if (!using_native_sched_clock() || !sched_clock_stable())
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index e3ac05c97b5e..49f68b15745f 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -73,7 +73,7 @@ static inline bool constraint_match(struct event_constraint *c, u64 ecode)
 #define PERF_X86_EVENT_PEBS_NA_HSW	0x0010 /* haswell style datala, unknown */
 #define PERF_X86_EVENT_EXCL		0x0020 /* HT exclusivity on counter */
 #define PERF_X86_EVENT_DYNAMIC		0x0040 /* dynamic alloc'd constraint */
-#define PERF_X86_EVENT_RDPMC_ALLOWED	0x0080 /* grant rdpmc permission */
+
 #define PERF_X86_EVENT_EXCL_ACCT	0x0100 /* accounted EXCL event */
 #define PERF_X86_EVENT_AUTO_RELOAD	0x0200 /* use PEBS auto-reload */
 #define PERF_X86_EVENT_LARGE_PEBS	0x0400 /* use large PEBS */
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index fe156a8170aa..12debf008d39 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -142,6 +142,8 @@ struct hw_perf_event {
 			int		event_base_rdpmc;
 			int		idx;
 			int		last_cpu;
+
+#define PERF_EVENT_FLAG_USER_READ_CNT	0x80000000
 			int		flags;
 
 			struct hw_perf_event_extra extra_reg;
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH v10 3/5] arm64: perf: Add userspace counter access disable switch
       [not found] <20210914204800.3945732-1-robh@kernel.org>
  2021-09-14 20:47 ` [PATCH v10 1/5] x86: perf: Move RDPMC event flag to a common definition Rob Herring
@ 2021-09-14 20:47 ` Rob Herring
  2021-10-13 17:30   ` Mark Rutland
  2021-09-14 20:47 ` [PATCH v10 4/5] arm64: perf: Enable PMU counter userspace access for perf event Rob Herring
  2 siblings, 1 reply; 9+ messages in thread
From: Rob Herring @ 2021-09-14 20:47 UTC (permalink / raw)
  To: Will Deacon, Mark Rutland, Peter Zijlstra, Ingo Molnar
  Cc: Catalin Marinas, Arnaldo Carvalho de Melo, Jiri Olsa, Kan Liang,
	Ian Rogers, Alexander Shishkin, honnappa.nagarahalli,
	Zachary.Leaf, Raphael Gault, Jonathan Cameron, Namhyung Kim,
	Itaru Kitayama, Vince Weaver, linux-arm-kernel, linux-kernel,
	linux-perf-users

Like x86, some users may want to disable userspace PMU counter
altogether. Add a sysctl 'perf_user_access' file to control userspace
counter access. The default is '0' which is disabled. Writing '1'
enables access.

Note that x86 also supports writing '2' to globally enable user access.
As there's no existing userspace support to worry about, this shouldn't
be necessary for Arm. It could be added later if the need arises.
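
For illustration, a minimal sketch (not part of this patch) of how a
self-monitoring program could check the switch before attempting direct
counter reads. The path is the one added below; the helper name is
illustrative and error handling is trimmed:

#include <stdio.h>

/* Returns 1 if /proc/sys/kernel/perf_user_access reads as 1, else 0. */
static int perf_user_access_enabled(void)
{
	FILE *f = fopen("/proc/sys/kernel/perf_user_access", "r");
	int val = 0;

	if (!f)
		return 0;	/* sysctl absent: treat access as disabled */
	if (fscanf(f, "%d", &val) != 1)
		val = 0;
	fclose(f);
	return val == 1;
}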

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: linux-perf-users@vger.kernel.org
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Rob Herring <robh@kernel.org>
---
v10:
 - Add documentation
 - Use a custom handler (needed on the next patch)
v9:
 - Use sysctl instead of sysfs attr
 - Default to disabled
v8:
 - New patch
---
 Documentation/admin-guide/sysctl/kernel.rst | 11 +++++++++
 arch/arm64/kernel/perf_event.c              | 27 +++++++++++++++++++++
 2 files changed, 38 insertions(+)

diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
index 426162009ce9..346a0dba5703 100644
--- a/Documentation/admin-guide/sysctl/kernel.rst
+++ b/Documentation/admin-guide/sysctl/kernel.rst
@@ -905,6 +905,17 @@ enabled, otherwise writing to this file will return ``-EBUSY``.
 The default value is 8.
 
 
+perf_user_access (arm64 only)
+=================================
+
+Controls user space access for reading perf event counters. When set to 1,
+user space can read performance monitor counter registers directly.
+
+The default value is 0 (access disabled).
+
+See Documentation/arm64/perf.rst for more information.
+
+
 pid_max
 =======
 
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index b4044469527e..a8f8dd741aeb 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -286,6 +286,8 @@ static const struct attribute_group armv8_pmuv3_events_attr_group = {
 PMU_FORMAT_ATTR(event, "config:0-15");
 PMU_FORMAT_ATTR(long, "config1:0");
 
+static int sysctl_perf_user_access __read_mostly;
+
 static inline bool armv8pmu_event_is_64bit(struct perf_event *event)
 {
 	return event->attr.config1 & 0x1;
@@ -1104,6 +1106,29 @@ static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
 	return probe.present ? 0 : -ENODEV;
 }
 
+int armv8pmu_proc_user_access_handler(struct ctl_table *table, int write,
+                void *buffer, size_t *lenp, loff_t *ppos)
+{
+	int ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+	if (ret || !write || sysctl_perf_user_access)
+		return ret;
+
+	return 0;
+}
+
+static struct ctl_table armv8_pmu_sysctl_table[] = {
+	{
+		.procname       = "perf_user_access",
+		.data		= &sysctl_perf_user_access,
+		.maxlen		= sizeof(unsigned int),
+		.mode           = 0644,
+		.proc_handler	= armv8pmu_proc_user_access_handler,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE,
+	},
+	{ }
+};
+
 static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char *name,
 			  int (*map_event)(struct perf_event *event),
 			  const struct attribute_group *events,
@@ -1136,6 +1161,8 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char *name,
 	cpu_pmu->attr_groups[ARMPMU_ATTR_GROUP_CAPS] = caps ?
 			caps : &armv8_pmuv3_caps_attr_group;
 
+	register_sysctl("kernel", armv8_pmu_sysctl_table);
+
 	return 0;
 }
 
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH v10 4/5] arm64: perf: Enable PMU counter userspace access for perf event
       [not found] <20210914204800.3945732-1-robh@kernel.org>
  2021-09-14 20:47 ` [PATCH v10 1/5] x86: perf: Move RDPMC event flag to a common definition Rob Herring
  2021-09-14 20:47 ` [PATCH v10 3/5] arm64: perf: Add userspace counter access disable switch Rob Herring
@ 2021-09-14 20:47 ` Rob Herring
  2021-10-14 16:58   ` Mark Rutland
  2 siblings, 1 reply; 9+ messages in thread
From: Rob Herring @ 2021-09-14 20:47 UTC (permalink / raw)
  To: Will Deacon, Mark Rutland, Peter Zijlstra, Ingo Molnar
  Cc: Catalin Marinas, Arnaldo Carvalho de Melo, Jiri Olsa, Kan Liang,
	Ian Rogers, Alexander Shishkin, honnappa.nagarahalli,
	Zachary.Leaf, Raphael Gault, Jonathan Cameron, Namhyung Kim,
	Itaru Kitayama, Vince Weaver, linux-arm-kernel, linux-kernel,
	linux-perf-users

Arm PMUs can support direct userspace access of counters which allows for
low overhead (i.e. no syscall) self-monitoring of tasks. The same feature
exists on x86 called 'rdpmc'. Unlike x86, userspace access will only be
enabled for thread bound events. This could be extended if needed, but
simplifies the implementation and reduces the chances for any
information leaks (which the x86 implementation suffers from).

PMU EL0 access will be enabled when an event with userspace access is
part of the thread's context. This includes when the event is not
scheduled on the PMU. There's some additional overhead clearing
dirty counters when access is enabled in order to prevent leaking
disabled counter data from other tasks.

Unlike x86, enabling of userspace access must be requested with a new
attr bit: config1:1. If the user requests userspace access and 64-bit
counters, then chaining will be disabled and the user will get the
maximum size counter the underlying h/w can support. The modes for
config1 are as follows:

config1 = 0 : user access disabled and always 32-bit
config1 = 1 : user access disabled and always 64-bit (using chaining if needed)
config1 = 2 : user access enabled and always 32-bit
config1 = 3 : user access enabled and counter size matches underlying counter.
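
For illustration, a minimal userspace sketch (not part of this patch) of
requesting user access plus the widest available counter with
config1 = 0x3 for a task-bound event, then checking whether the kernel
granted direct access via the mmap page. Names here are illustrative and
error/cleanup paths are trimmed:

#include <linux/perf_event.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

static struct perf_event_mmap_page *open_self_cycles(int *fdp)
{
	struct perf_event_attr attr = {
		.type		= PERF_TYPE_HARDWARE,
		.size		= sizeof(attr),
		.config		= PERF_COUNT_HW_CPU_CYCLES,
		.config1	= 0x3,	/* user access + widest counter */
	};
	struct perf_event_mmap_page *pc;
	/* pid == 0, cpu == -1: a per-thread event on the calling task */
	int fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);

	if (fd < 0)
		return NULL;

	pc = mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ, MAP_SHARED, fd, 0);
	if (pc == MAP_FAILED || !pc->cap_user_rdpmc)
		return NULL;	/* no direct read; fall back to read(2) */

	*fdp = fd;
	return pc;
}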

Based on work by Raphael Gault <raphael.gault@arm.com>, but has been
completely re-written.

Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-perf-users@vger.kernel.org
Signed-off-by: Rob Herring <robh@kernel.org>

---
v10:
 - Don't control enabling user access based on mmap(). Changing the
   event_(un)mapped to run on the event's cpu doesn't work for x86.
   Triggering on mmap() doesn't limit access in any way and complicates
   the implementation.
 - Drop dirty counter tracking and just clear all unused counters.
 - Make the sysctl immediately disable access via IPI.
 - Merge armv8pmu_event_is_chained() and armv8pmu_event_can_chain()

v9:
 - Enabling/disabling of user access is now controlled in .start() and
   mmap hooks which are now called on CPUs that the event is on.
   Depends on rework of perf core and x86 RDPMC code posted here:
   https://lore.kernel.org/lkml/20210728230230.1911468-1-robh@kernel.org/

v8:
 - Rework user access tracking and enabling to be done on task
   context changes using sched_task() hook. This avoids the need for any
   IPIs, mm_switch hooks or undef instr handler.
 - Only support user access when explicitly requested on open and
   only for a thread bound events. This avoids some of the information
   leaks x86 has and simplifies the implementation.

v7:
 - Clear disabled counters when user access is enabled for a task to
   avoid leaking other tasks counter data.
 - Rework context switch handling utilizing sched_task callback
 - Add armv8pmu_event_can_chain() helper
 - Rework config1 flags handling structure
 - Use ARMV8_IDX_CYCLE_COUNTER_USER define for remapped user cycle
   counter index

v6:
 - Add new attr.config1 rdpmc bit for userspace to hint it wants
   userspace access when also requesting 64-bit counters.

v5:
 - Only set cap_user_rdpmc if event is on current cpu
 - Limit enabling/disabling access to CPUs associated with the PMU
   (supported_cpus) and with the mm_struct matching current->active_mm.

v2:
 - Move mapped/unmapped into arm64 code. Fixes arm32.
 - Rebase on cap_user_time_short changes

Changes from Raphael's v4:
  - Drop homogeneous check
  - Disable access for chained counters
  - Set pmc_width in user page
---
 arch/arm64/kernel/perf_event.c | 99 +++++++++++++++++++++++++++++++---
 1 file changed, 92 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index a8f8dd741aeb..91af3d1c254b 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -285,6 +285,7 @@ static const struct attribute_group armv8_pmuv3_events_attr_group = {
 
 PMU_FORMAT_ATTR(event, "config:0-15");
 PMU_FORMAT_ATTR(long, "config1:0");
+PMU_FORMAT_ATTR(rdpmc, "config1:1");
 
 static int sysctl_perf_user_access __read_mostly;
 
@@ -293,9 +294,15 @@ static inline bool armv8pmu_event_is_64bit(struct perf_event *event)
 	return event->attr.config1 & 0x1;
 }
 
+static inline bool armv8pmu_event_want_user_access(struct perf_event *event)
+{
+	return event->attr.config1 & 0x2;
+}
+
 static struct attribute *armv8_pmuv3_format_attrs[] = {
 	&format_attr_event.attr,
 	&format_attr_long.attr,
+	&format_attr_rdpmc.attr,
 	NULL,
 };
 
@@ -364,7 +371,7 @@ static const struct attribute_group armv8_pmuv3_caps_attr_group = {
  */
 #define	ARMV8_IDX_CYCLE_COUNTER	0
 #define	ARMV8_IDX_COUNTER0	1
-
+#define	ARMV8_IDX_CYCLE_COUNTER_USER	32
 
 /*
  * We unconditionally enable ARMv8.5-PMU long event counter support
@@ -379,15 +386,14 @@ static bool armv8pmu_has_long_event(struct arm_pmu *cpu_pmu)
 /*
  * We must chain two programmable counters for 64 bit events,
  * except when we have allocated the 64bit cycle counter (for CPU
- * cycles event). This must be called only when the event has
- * a counter allocated.
+ * cycles event) or when user space counter access is enabled.
  */
 static inline bool armv8pmu_event_is_chained(struct perf_event *event)
 {
 	int idx = event->hw.idx;
 	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
 
-	return !WARN_ON(idx < 0) &&
+	return !(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT) &&
 	       armv8pmu_event_is_64bit(event) &&
 	       !armv8pmu_has_long_event(cpu_pmu) &&
 	       (idx != ARMV8_IDX_CYCLE_COUNTER);
@@ -720,6 +726,27 @@ static inline u32 armv8pmu_getreset_flags(void)
 	return value;
 }
 
+static void armv8pmu_disable_user_access(void)
+{
+	write_sysreg(0, pmuserenr_el0);
+}
+
+static void armv8pmu_enable_user_access(struct arm_pmu *cpu_pmu)
+{
+	int i;
+	struct pmu_hw_events *cpuc = this_cpu_ptr(cpu_pmu->hw_events);
+
+	/* Clear any unused counters to avoid leaking their contents */
+	for_each_clear_bit(i, cpuc->used_mask, cpu_pmu->num_events) {
+		if (i == ARMV8_IDX_CYCLE_COUNTER)
+			write_sysreg(0, pmccntr_el0);
+		else
+			armv8pmu_write_evcntr(i, 0);
+	}
+
+	write_sysreg(ARMV8_PMU_USERENR_ER | ARMV8_PMU_USERENR_CR, pmuserenr_el0);
+}
+
 static void armv8pmu_enable_event(struct perf_event *event)
 {
 	/*
@@ -763,6 +790,14 @@ static void armv8pmu_disable_event(struct perf_event *event)
 
 static void armv8pmu_start(struct arm_pmu *cpu_pmu)
 {
+	struct perf_event_context *task_ctx =
+		this_cpu_ptr(cpu_pmu->pmu.pmu_cpu_context)->task_ctx;
+
+	if (sysctl_perf_user_access && task_ctx && task_ctx->nr_user)
+		armv8pmu_enable_user_access(cpu_pmu);
+	else
+		armv8pmu_disable_user_access();
+
 	/* Enable all counters */
 	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
 }
@@ -880,13 +915,16 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
 	if (evtype == ARMV8_PMUV3_PERFCTR_CPU_CYCLES) {
 		if (!test_and_set_bit(ARMV8_IDX_CYCLE_COUNTER, cpuc->used_mask))
 			return ARMV8_IDX_CYCLE_COUNTER;
+		else if (armv8pmu_event_is_64bit(event) &&
+			   armv8pmu_event_want_user_access(event) &&
+			   !armv8pmu_has_long_event(cpu_pmu))
+				return -EAGAIN;
 	}
 
 	/*
 	 * Otherwise use events counters
 	 */
-	if (armv8pmu_event_is_64bit(event) &&
-	    !armv8pmu_has_long_event(cpu_pmu))
+	if (armv8pmu_event_is_chained(event))
 		return	armv8pmu_get_chain_idx(cpuc, cpu_pmu);
 	else
 		return armv8pmu_get_single_idx(cpuc, cpu_pmu);
@@ -902,6 +940,23 @@ static void armv8pmu_clear_event_idx(struct pmu_hw_events *cpuc,
 		clear_bit(idx - 1, cpuc->used_mask);
 }
 
+static int armv8pmu_access_event_idx(struct perf_event *event)
+{
+	if (!sysctl_perf_user_access ||
+	    !(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT))
+		return 0;
+
+	/*
+	 * We remap the cycle counter index to 32 to
+	 * match the offset applied to the rest of
+	 * the counter indices.
+	 */
+	if (event->hw.idx == ARMV8_IDX_CYCLE_COUNTER)
+		return ARMV8_IDX_CYCLE_COUNTER_USER;
+
+	return event->hw.idx;
+}
+
 /*
  * Add an event filter to a given event.
  */
@@ -995,9 +1050,23 @@ static int __armv8_pmuv3_map_event(struct perf_event *event,
 				       &armv8_pmuv3_perf_cache_map,
 				       ARMV8_PMU_EVTYPE_EVENT);
 
-	if (armv8pmu_event_is_64bit(event))
+	/*
+	 * At this point, the counter is not assigned. If a 64-bit counter is
+	 * requested, we must make sure the h/w has 64-bit counters if we set
+	 * the event size to 64-bit because chaining is not supported with
+	 * userspace access. This may still fail later on if the CPU cycle
+	 * counter is in use.
+	 */
+	if (armv8pmu_event_is_64bit(event) &&
+	    (!armv8pmu_event_want_user_access(event) ||
+	     armv8pmu_has_long_event(armpmu) || (hw_event_id == ARMV8_PMUV3_PERFCTR_CPU_CYCLES)))
 		event->hw.flags |= ARMPMU_EVT_64BIT;
 
+	/* Userspace counter access only enabled if requested and a per task event */
+	if (sysctl_perf_user_access && armv8pmu_event_want_user_access(event) &&
+	    (event->attach_state & PERF_ATTACH_TASK))
+		event->hw.flags |= PERF_EVENT_FLAG_USER_READ_CNT;
+
 	/* Only expose micro/arch events supported by this PMU */
 	if ((hw_event_id > 0) && (hw_event_id < ARMV8_PMUV3_MAX_COMMON_EVENTS)
 	    && test_bit(hw_event_id, armpmu->pmceid_bitmap)) {
@@ -1106,6 +1175,11 @@ static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
 	return probe.present ? 0 : -ENODEV;
 }
 
+static void armv8pmu_disable_user_access_ipi(void *unused)
+{
+	armv8pmu_disable_user_access();
+}
+
 int armv8pmu_proc_user_access_handler(struct ctl_table *table, int write,
                 void *buffer, size_t *lenp, loff_t *ppos)
 {
@@ -1113,6 +1187,7 @@ int armv8pmu_proc_user_access_handler(struct ctl_table *table, int write,
 	if (ret || !write || sysctl_perf_user_access)
 		return ret;
 
+	on_each_cpu(armv8pmu_disable_user_access_ipi, NULL, 1);
 	return 0;
 }
 
@@ -1152,6 +1227,8 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char *name,
 	cpu_pmu->set_event_filter	= armv8pmu_set_event_filter;
 	cpu_pmu->filter_match		= armv8pmu_filter_match;
 
+	cpu_pmu->pmu.event_idx		= armv8pmu_access_event_idx;
+
 	cpu_pmu->name			= name;
 	cpu_pmu->map_event		= map_event;
 	cpu_pmu->attr_groups[ARMPMU_ATTR_GROUP_EVENTS] = events ?
@@ -1328,6 +1405,14 @@ void arch_perf_update_userpage(struct perf_event *event,
 	userpg->cap_user_time = 0;
 	userpg->cap_user_time_zero = 0;
 	userpg->cap_user_time_short = 0;
+	userpg->cap_user_rdpmc = !!(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT);
+
+	if (userpg->cap_user_rdpmc) {
+		if (event->hw.flags & ARMPMU_EVT_64BIT)
+			userpg->pmc_width = 64;
+		else
+			userpg->pmc_width = 32;
+	}
 
 	do {
 		rd = sched_clock_read_begin(&seq);
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH v10 1/5] x86: perf: Move RDPMC event flag to a common definition
  2021-09-14 20:47 ` [PATCH v10 1/5] x86: perf: Move RDPMC event flag to a common definition Rob Herring
@ 2021-10-13 17:25   ` Mark Rutland
  0 siblings, 0 replies; 9+ messages in thread
From: Mark Rutland @ 2021-10-13 17:25 UTC (permalink / raw)
  To: Rob Herring, Peter Zijlstra
  Cc: Will Deacon, Ingo Molnar, Catalin Marinas,
	Arnaldo Carvalho de Melo, Jiri Olsa, Kan Liang, Ian Rogers,
	Alexander Shishkin, honnappa.nagarahalli, Zachary.Leaf,
	Raphael Gault, Jonathan Cameron, Namhyung Kim, Itaru Kitayama,
	Vince Weaver, linux-arm-kernel, linux-kernel, Thomas Gleixner,
	Borislav Petkov, x86, H. Peter Anvin, linux-perf-users

Hi Rob,

On Tue, Sep 14, 2021 at 03:47:56PM -0500, Rob Herring wrote:
> In preparation to enable user counter access on arm64 and to move some
> of the user access handling to perf core, create a common event flag for
> user counter access and convert x86 to use it.
> 
> Since the architecture specific flags start at the LSB, start at the
> MSB for common flags.

Minor comments below (definition rename, and a comment block), but with
those:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Peter, are you happy with this from the x86 side?

> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
> Cc: Jiri Olsa <jolsa@redhat.com>
> Cc: Namhyung Kim <namhyung@kernel.org>
> Cc: Kan Liang <kan.liang@linux.intel.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: x86@kernel.org
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: linux-perf-users@vger.kernel.org
> Signed-off-by: Rob Herring <robh@kernel.org>
> ---
>  arch/x86/events/core.c       | 10 +++++-----
>  arch/x86/events/perf_event.h |  2 +-
>  include/linux/perf_event.h   |  2 ++
>  3 files changed, 8 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> index 2a57dbed4894..2bd50fc061e1 100644
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -2469,7 +2469,7 @@ static int x86_pmu_event_init(struct perf_event *event)
>  
>  	if (READ_ONCE(x86_pmu.attr_rdpmc) &&
>  	    !(event->hw.flags & PERF_X86_EVENT_LARGE_PEBS))
> -		event->hw.flags |= PERF_X86_EVENT_RDPMC_ALLOWED;
> +		event->hw.flags |= PERF_EVENT_FLAG_USER_READ_CNT;
>  
>  	return err;
>  }
> @@ -2503,7 +2503,7 @@ void perf_clear_dirty_counters(void)
>  
>  static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
>  {
> -	if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
> +	if (!(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT))
>  		return;
>  
>  	/*
> @@ -2524,7 +2524,7 @@ static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
>  
>  static void x86_pmu_event_unmapped(struct perf_event *event, struct mm_struct *mm)
>  {
> -	if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
> +	if (!(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT))
>  		return;
>  
>  	if (atomic_dec_and_test(&mm->context.perf_rdpmc_allowed))
> @@ -2535,7 +2535,7 @@ static int x86_pmu_event_idx(struct perf_event *event)
>  {
>  	struct hw_perf_event *hwc = &event->hw;
>  
> -	if (!(hwc->flags & PERF_X86_EVENT_RDPMC_ALLOWED))
> +	if (!(hwc->flags & PERF_EVENT_FLAG_USER_READ_CNT))
>  		return 0;
>  
>  	if (is_metric_idx(hwc->idx))
> @@ -2718,7 +2718,7 @@ void arch_perf_update_userpage(struct perf_event *event,
>  	userpg->cap_user_time = 0;
>  	userpg->cap_user_time_zero = 0;
>  	userpg->cap_user_rdpmc =
> -		!!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED);
> +		!!(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT);
>  	userpg->pmc_width = x86_pmu.cntval_bits;
>  
>  	if (!using_native_sched_clock() || !sched_clock_stable())
> diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
> index e3ac05c97b5e..49f68b15745f 100644
> --- a/arch/x86/events/perf_event.h
> +++ b/arch/x86/events/perf_event.h
> @@ -73,7 +73,7 @@ static inline bool constraint_match(struct event_constraint *c, u64 ecode)
>  #define PERF_X86_EVENT_PEBS_NA_HSW	0x0010 /* haswell style datala, unknown */
>  #define PERF_X86_EVENT_EXCL		0x0020 /* HT exclusivity on counter */
>  #define PERF_X86_EVENT_DYNAMIC		0x0040 /* dynamic alloc'd constraint */
> -#define PERF_X86_EVENT_RDPMC_ALLOWED	0x0080 /* grant rdpmc permission */
> +
>  #define PERF_X86_EVENT_EXCL_ACCT	0x0100 /* accounted EXCL event */
>  #define PERF_X86_EVENT_AUTO_RELOAD	0x0200 /* use PEBS auto-reload */
>  #define PERF_X86_EVENT_LARGE_PEBS	0x0400 /* use large PEBS */
> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> index fe156a8170aa..12debf008d39 100644
> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -142,6 +142,8 @@ struct hw_perf_event {
>  			int		event_base_rdpmc;
>  			int		idx;
>  			int		last_cpu;
> +
> +#define PERF_EVENT_FLAG_USER_READ_CNT	0x80000000

I realise this matches the style of PERF_HES_* and PERF_EF_*. but could
we please arrange this like the PERF_PMU_CAP_* definitions, and move
this immediately before the struct hw_perf_event defintion, with a
comment block, e.g.

/*
 * hw_perf_event::flag values
 *
 * PERF_EVENT_FLAG_ARCH bits are reserved for architecture-specific
 * usage.
 */
#define PERF_EVENT_FLAG_ARCH			0x0000ffff
#define PERF_EVENT_FLAG_USER_READ_CNT		0x80000000

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH v10 3/5] arm64: perf: Add userspace counter access disable switch
  2021-09-14 20:47 ` [PATCH v10 3/5] arm64: perf: Add userspace counter access disable switch Rob Herring
@ 2021-10-13 17:30   ` Mark Rutland
  0 siblings, 0 replies; 9+ messages in thread
From: Mark Rutland @ 2021-10-13 17:30 UTC (permalink / raw)
  To: Rob Herring
  Cc: Will Deacon, Peter Zijlstra, Ingo Molnar, Catalin Marinas,
	Arnaldo Carvalho de Melo, Jiri Olsa, Kan Liang, Ian Rogers,
	Alexander Shishkin, honnappa.nagarahalli, Zachary.Leaf,
	Raphael Gault, Jonathan Cameron, Namhyung Kim, Itaru Kitayama,
	Vince Weaver, linux-arm-kernel, linux-kernel, linux-perf-users

On Tue, Sep 14, 2021 at 03:47:58PM -0500, Rob Herring wrote:
> Like x86, some users may want to disable userspace PMU counter access
> altogether. Add a sysctl 'perf_user_access' file to control userspace
> counter access. The default is '0', which disables access. Writing '1'
> enables access.
> 
> Note that x86 also supports writing '2' to globally enable user access.

For clarity it might be worth mentioning that on x86 this is controlled
by the PMU's `rdpmc` sysfs attribute, i.e.

  Note that x86 supports globally enabling user access by writing '2' to
  /sys/bus/event_source/devices/cpu/rdpmc

> As there's no existing userspace support to worry about, this shouldn't
> be necessary for Arm. It could be added later if the need arises.
> 
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
> Cc: Jiri Olsa <jolsa@redhat.com>
> Cc: Namhyung Kim <namhyung@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: linux-perf-users@vger.kernel.org
> Acked-by: Will Deacon <will@kernel.org>
> Signed-off-by: Rob Herring <robh@kernel.org>
> ---
> v10:
>  - Add documentation
>  - Use a custom handler (needed on the next patch)
> v9:
>  - Use sysctl instead of sysfs attr
>  - Default to disabled
> v8:
>  - New patch
> ---
>  Documentation/admin-guide/sysctl/kernel.rst | 11 +++++++++
>  arch/arm64/kernel/perf_event.c              | 27 +++++++++++++++++++++
>  2 files changed, 38 insertions(+)
> 
> diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
> index 426162009ce9..346a0dba5703 100644
> --- a/Documentation/admin-guide/sysctl/kernel.rst
> +++ b/Documentation/admin-guide/sysctl/kernel.rst
> @@ -905,6 +905,17 @@ enabled, otherwise writing to this file will return ``-EBUSY``.
>  The default value is 8.
>  
>  
> +perf_user_access (arm64 only)
> +=================================
> +
> +Controls user space access for reading perf event counters. When set to 1,
> +user space can read performance monitor counter registers directly.
> +
> +The default value is 0 (access disabled).
> +
> +See Documentation/arm64/perf.rst for more information.

Looking at the existing perf sysctls:

# ls /proc/sys/kernel/perf*
/proc/sys/kernel/perf_cpu_time_max_percent
/proc/sys/kernel/perf_event_max_contexts_per_stack
/proc/sys/kernel/perf_event_max_sample_rate
/proc/sys/kernel/perf_event_max_stack
/proc/sys/kernel/perf_event_mlock_kb
/proc/sys/kernel/perf_event_paranoid

I see that other than `perf_cpu_time_max_percent`, we've used
`perf_event_` as the prefix, and I suspect we should do the same here,
but I guess it may not matter either way.

> +
> +
>  pid_max
>  =======
>  
> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
> index b4044469527e..a8f8dd741aeb 100644
> --- a/arch/arm64/kernel/perf_event.c
> +++ b/arch/arm64/kernel/perf_event.c
> @@ -286,6 +286,8 @@ static const struct attribute_group armv8_pmuv3_events_attr_group = {
>  PMU_FORMAT_ATTR(event, "config:0-15");
>  PMU_FORMAT_ATTR(long, "config1:0");
>  
> +static int sysctl_perf_user_access __read_mostly;
> +
>  static inline bool armv8pmu_event_is_64bit(struct perf_event *event)
>  {
>  	return event->attr.config1 & 0x1;
> @@ -1104,6 +1106,29 @@ static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
>  	return probe.present ? 0 : -ENODEV;
>  }
>  
> +int armv8pmu_proc_user_access_handler(struct ctl_table *table, int write,
> +                void *buffer, size_t *lenp, loff_t *ppos)
> +{
> +	int ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
> +	if (ret || !write || sysctl_perf_user_access)
> +		return ret;
> +
> +	return 0;
> +}

Maybe this is needed in the next patch, but the if statement is entirely
redundant on this patch and looks really odd.

Can we please either:

1) Use proc_dointvec_minmax() directly in this patch (which is what Will
Acked in v9) and add the wrapper in the next patch when we need it.

2) make this:

| int armv8pmu_proc_user_access_handler(struct ctl_table *table, int write,
|                 void *buffer, size_t *lenp, loff_t *ppos)
| {
| 	return proc_dointvec_minmax(table, write, buffer, lenp, ppos);
| }

... and flesh it out in the next patch.

With either of those two options (and regardless of whether the
attribute is renamed):

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Thanks,
Mark.

> +
> +static struct ctl_table armv8_pmu_sysctl_table[] = {
> +	{
> +		.procname       = "perf_user_access",
> +		.data		= &sysctl_perf_user_access,
> +		.maxlen		= sizeof(unsigned int),
> +		.mode           = 0644,
> +		.proc_handler	= armv8pmu_proc_user_access_handler,
> +		.extra1		= SYSCTL_ZERO,
> +		.extra2		= SYSCTL_ONE,
> +	},
> +	{ }
> +};
> +
>  static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char *name,
>  			  int (*map_event)(struct perf_event *event),
>  			  const struct attribute_group *events,
> @@ -1136,6 +1161,8 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char *name,
>  	cpu_pmu->attr_groups[ARMPMU_ATTR_GROUP_CAPS] = caps ?
>  			caps : &armv8_pmuv3_caps_attr_group;
>  
> +	register_sysctl("kernel", armv8_pmu_sysctl_table);
> +
>  	return 0;
>  }
>  
> -- 
> 2.30.2
> 

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH v10 4/5] arm64: perf: Enable PMU counter userspace access for perf event
  2021-09-14 20:47 ` [PATCH v10 4/5] arm64: perf: Enable PMU counter userspace access for perf event Rob Herring
@ 2021-10-14 16:58   ` Mark Rutland
  2021-10-14 19:24     ` Rob Herring
  2021-10-15 15:53     ` Rob Herring
  0 siblings, 2 replies; 9+ messages in thread
From: Mark Rutland @ 2021-10-14 16:58 UTC (permalink / raw)
  To: Rob Herring
  Cc: Will Deacon, Peter Zijlstra, Ingo Molnar, Catalin Marinas,
	Arnaldo Carvalho de Melo, Jiri Olsa, Kan Liang, Ian Rogers,
	Alexander Shishkin, honnappa.nagarahalli, Zachary.Leaf,
	Raphael Gault, Jonathan Cameron, Namhyung Kim, Itaru Kitayama,
	Vince Weaver, linux-arm-kernel, linux-kernel, linux-perf-users

Hi Rob,

This looks pretty good!

I have one largish query below, and otherwise only trivialities that I'm
happy to fix up.

On Tue, Sep 14, 2021 at 03:47:59PM -0500, Rob Herring wrote:
> Arm PMUs can support direct userspace access of counters which allows for
> low overhead (i.e. no syscall) self-monitoring of tasks. The same feature
> exists on x86 called 'rdpmc'. Unlike x86, userspace access will only be
> enabled for thread bound events. This could be extended if needed, but
> simplifies the implementation and reduces the chances for any
> information leaks (which the x86 implementation suffers from).
> 
> PMU EL0 access will be enabled when an event with userspace access is
> part of the thread's context. This includes when the event is not
> scheduled on the PMU. There's some additional overhead clearing
> dirty counters when access is enabled in order to prevent leaking
> disabled counter data from other tasks.
> 
> Unlike x86, enabling of userspace access must be requested with a new
> attr bit: config1:1. If the user requests userspace access and 64-bit
> counters, then chaining will be disabled and the user will get the
> maximum size counter the underlying h/w can support. The modes for
> config1 are as follows:
> 
> config1 = 0 : user access disabled and always 32-bit
> config1 = 1 : user access disabled and always 64-bit (using chaining if needed)
> config1 = 2 : user access enabled and always 32-bit
> config1 = 3 : user access enabled and counter size matches underlying counter.

We probably need to note somewhere (i.e. in the next patch) that we mean
*logically* 32-bit, and this could be a biased 64-bit counter, so
userspace needs to treat the upper 32-bits of counters as UNKNOWN.
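
For example, roughly the usual pmc_width handling on the userspace side
(sketch only; read_pmc() is a stand-in for the arch-specific counter
read, not a real interface):

	u64 width = pc->pmc_width;
	u64 pmc = read_pmc(pc->index - 1);

	/* bits above pmc_width are UNKNOWN: strip them before use */
	pmc <<= 64 - width;
	pmc >>= 64 - width;
	count = pc->offset + pmc;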

For the `config1 = 3` case (potentially) overriding the usual long
semantic, I'm struggling to understand why we need that rather than
forcing the use of a 64-bit counter, because in that case:

* For a CPU_CYCLES event:
  __armv8_pmuv3_map_event() will always pick 64-bits
  get_event_idx() may fail to allocate a 64-bit counter.

* For other events:
  __armv8_pmuv3_map_event() will pick 32/64 based on long counter
  support
  get_event_idx() will only fail if there are no counters free.

Whereas if __armv8_pmuv3_map_event() returned an error for the latter
when long counter support is not implemented, we'd have consistent
`long` semantics, and the CPU_CYCLES behaviour would be identical.

What's the rationale for `3` leaving the choice to the kernel?

If the problem is discoverability, I'd be happy to add something to
sysfs to describe whether the PMU has long event support.

> Based on work by Raphael Gault <raphael.gault@arm.com>, but has been
> completely re-written.
> 
> Cc: Will Deacon <will@kernel.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
> Cc: Jiri Olsa <jolsa@redhat.com>
> Cc: Namhyung Kim <namhyung@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: linux-perf-users@vger.kernel.org
> Signed-off-by: Rob Herring <robh@kernel.org>
> 
> ---
> v10:
>  - Don't control enabling user access based on mmap(). Changing the
>    event_(un)mapped to run on the event's cpu doesn't work for x86.
>    Triggering on mmap() doesn't limit access in any way and complicates
>    the implementation.
>  - Drop dirty counter tracking and just clear all unused counters.
>  - Make the sysctl immediately disable access via IPI.
>  - Merge armv8pmu_event_is_chained() and armv8pmu_event_can_chain()
> 
> v9:
>  - Enabling/disabling of user access is now controlled in .start() and
>    mmap hooks which are now called on CPUs that the event is on.
>    Depends on rework of perf core and x86 RDPMC code posted here:
>    https://lore.kernel.org/lkml/20210728230230.1911468-1-robh@kernel.org/
> 
> v8:
>  - Rework user access tracking and enabling to be done on task
>    context changes using sched_task() hook. This avoids the need for any
>    IPIs, mm_switch hooks or undef instr handler.
>  - Only support user access when explicitly requested on open and
>    only for a thread bound events. This avoids some of the information
>    leaks x86 has and simplifies the implementation.
> 
> v7:
>  - Clear disabled counters when user access is enabled for a task to
>    avoid leaking other tasks counter data.
>  - Rework context switch handling utilizing sched_task callback
>  - Add armv8pmu_event_can_chain() helper
>  - Rework config1 flags handling structure
>  - Use ARMV8_IDX_CYCLE_COUNTER_USER define for remapped user cycle
>    counter index
> 
> v6:
>  - Add new attr.config1 rdpmc bit for userspace to hint it wants
>    userspace access when also requesting 64-bit counters.
> 
> v5:
>  - Only set cap_user_rdpmc if event is on current cpu
>  - Limit enabling/disabling access to CPUs associated with the PMU
>    (supported_cpus) and with the mm_struct matching current->active_mm.
> 
> v2:
>  - Move mapped/unmapped into arm64 code. Fixes arm32.
>  - Rebase on cap_user_time_short changes
> 
> Changes from Raphael's v4:
>   - Drop homogeneous check
>   - Disable access for chained counters
>   - Set pmc_width in user page
> ---
>  arch/arm64/kernel/perf_event.c | 99 +++++++++++++++++++++++++++++++---
>  1 file changed, 92 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
> index a8f8dd741aeb..91af3d1c254b 100644
> --- a/arch/arm64/kernel/perf_event.c
> +++ b/arch/arm64/kernel/perf_event.c
> @@ -285,6 +285,7 @@ static const struct attribute_group armv8_pmuv3_events_attr_group = {
>  
>  PMU_FORMAT_ATTR(event, "config:0-15");
>  PMU_FORMAT_ATTR(long, "config1:0");
> +PMU_FORMAT_ATTR(rdpmc, "config1:1");
>  
>  static int sysctl_perf_user_access __read_mostly;
>  
> @@ -293,9 +294,15 @@ static inline bool armv8pmu_event_is_64bit(struct perf_event *event)
>  	return event->attr.config1 & 0x1;
>  }
>  
> +static inline bool armv8pmu_event_want_user_access(struct perf_event *event)
> +{
> +	return event->attr.config1 & 0x2;
> +}
> +
>  static struct attribute *armv8_pmuv3_format_attrs[] = {
>  	&format_attr_event.attr,
>  	&format_attr_long.attr,
> +	&format_attr_rdpmc.attr,
>  	NULL,
>  };
>  
> @@ -364,7 +371,7 @@ static const struct attribute_group armv8_pmuv3_caps_attr_group = {
>   */
>  #define	ARMV8_IDX_CYCLE_COUNTER	0
>  #define	ARMV8_IDX_COUNTER0	1
> -
> +#define	ARMV8_IDX_CYCLE_COUNTER_USER	32
>  
>  /*
>   * We unconditionally enable ARMv8.5-PMU long event counter support
> @@ -379,15 +386,14 @@ static bool armv8pmu_has_long_event(struct arm_pmu *cpu_pmu)
>  /*
>   * We must chain two programmable counters for 64 bit events,
>   * except when we have allocated the 64bit cycle counter (for CPU
> - * cycles event). This must be called only when the event has
> - * a counter allocated.
> + * cycles event) or when user space counter access is enabled.
>   */
>  static inline bool armv8pmu_event_is_chained(struct perf_event *event)
>  {
>  	int idx = event->hw.idx;
>  	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
>  
> -	return !WARN_ON(idx < 0) &&
> +	return !(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT) &&
>  	       armv8pmu_event_is_64bit(event) &&
>  	       !armv8pmu_has_long_event(cpu_pmu) &&
>  	       (idx != ARMV8_IDX_CYCLE_COUNTER);
> @@ -720,6 +726,27 @@ static inline u32 armv8pmu_getreset_flags(void)
>  	return value;

Above this, could we please add:

| static inline bool armv8pmu_event_has_user_read(struct perf_event *event)
| {
| 	return event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT;
| }

... and use that where we look at PERF_EVENT_FLAG_USER_READ_CNT?

>  
> +static void armv8pmu_disable_user_access(void)
> +{
> +	write_sysreg(0, pmuserenr_el0);
> +}
> +
> +static void armv8pmu_enable_user_access(struct arm_pmu *cpu_pmu)
> +{
> +	int i;
> +	struct pmu_hw_events *cpuc = this_cpu_ptr(cpu_pmu->hw_events);
> +
> +	/* Clear any unused counters to avoid leaking their contents */
> +	for_each_clear_bit(i, cpuc->used_mask, cpu_pmu->num_events) {
> +		if (i == ARMV8_IDX_CYCLE_COUNTER)
> +			write_sysreg(0, pmccntr_el0);
> +		else
> +			armv8pmu_write_evcntr(i, 0);
> +	}
> +
> +	write_sysreg(ARMV8_PMU_USERENR_ER | ARMV8_PMU_USERENR_CR, pmuserenr_el0);
> +}
> +
>  static void armv8pmu_enable_event(struct perf_event *event)
>  {
>  	/*
> @@ -763,6 +790,14 @@ static void armv8pmu_disable_event(struct perf_event *event)
>  
>  static void armv8pmu_start(struct arm_pmu *cpu_pmu)
>  {
> +	struct perf_event_context *task_ctx =
> +		this_cpu_ptr(cpu_pmu->pmu.pmu_cpu_context)->task_ctx;
> +
> +	if (sysctl_perf_user_access && task_ctx && task_ctx->nr_user)
> +		armv8pmu_enable_user_access(cpu_pmu);
> +	else
> +		armv8pmu_disable_user_access();
> +
>  	/* Enable all counters */
>  	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
>  }
> @@ -880,13 +915,16 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
>  	if (evtype == ARMV8_PMUV3_PERFCTR_CPU_CYCLES) {
>  		if (!test_and_set_bit(ARMV8_IDX_CYCLE_COUNTER, cpuc->used_mask))
>  			return ARMV8_IDX_CYCLE_COUNTER;
> +		else if (armv8pmu_event_is_64bit(event) &&
> +			   armv8pmu_event_want_user_access(event) &&
> +			   !armv8pmu_has_long_event(cpu_pmu))
> +				return -EAGAIN;
>  	}
>  
>  	/*
>  	 * Otherwise use events counters
>  	 */
> -	if (armv8pmu_event_is_64bit(event) &&
> -	    !armv8pmu_has_long_event(cpu_pmu))
> +	if (armv8pmu_event_is_chained(event))
>  		return	armv8pmu_get_chain_idx(cpuc, cpu_pmu);
>  	else
>  		return armv8pmu_get_single_idx(cpuc, cpu_pmu);
> @@ -902,6 +940,23 @@ static void armv8pmu_clear_event_idx(struct pmu_hw_events *cpuc,
>  		clear_bit(idx - 1, cpuc->used_mask);
>  }
>  
> +static int armv8pmu_access_event_idx(struct perf_event *event)

Can we please s/access/user/ here?

> +{
> +	if (!sysctl_perf_user_access ||
> +	    !(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT))
> +		return 0;
> +
> +	/*
> +	 * We remap the cycle counter index to 32 to
> +	 * match the offset applied to the rest of
> +	 * the counter indices.
> +	 */
> +	if (event->hw.idx == ARMV8_IDX_CYCLE_COUNTER)
> +		return ARMV8_IDX_CYCLE_COUNTER_USER;
> +
> +	return event->hw.idx;
> +}
> +
>  /*
>   * Add an event filter to a given event.
>   */
> @@ -995,9 +1050,23 @@ static int __armv8_pmuv3_map_event(struct perf_event *event,
>  				       &armv8_pmuv3_perf_cache_map,
>  				       ARMV8_PMU_EVTYPE_EVENT);
>  
> -	if (armv8pmu_event_is_64bit(event))
> +	/*
> +	 * At this point, the counter is not assigned. If a 64-bit counter is
> +	 * requested, we must make sure the h/w has 64-bit counters if we set
> +	 * the event size to 64-bit because chaining is not supported with
> +	 * userspace access. This may still fail later on if the CPU cycle
> +	 * counter is in use.
> +	 */
> +	if (armv8pmu_event_is_64bit(event) &&
> +	    (!armv8pmu_event_want_user_access(event) ||
> +	     armv8pmu_has_long_event(armpmu) || (hw_event_id == ARMV8_PMUV3_PERFCTR_CPU_CYCLES)))
>  		event->hw.flags |= ARMPMU_EVT_64BIT;

If we can follow my suggestion in reply to the cover text, we can make
this:

	if (armv8pmu_event_is_64bit(event))
		event->hw.flags |= ARMPMU_EVT_64BIT;

	/*
	 * User events must be allocated into a single counter, and so
	 * must not be chained.
	 *
	 * Most 64-bit events require long counter support, but 64-bit
	 * CPU_CYCLES events can be placed into the dedicated cycle
	 * counter when this is free.
	 */
	if (armv8pmu_event_want_user_access(event)) {
		if (armv8pmu_event_is_64bit(event) &&
		    (hw_event_id != ARMV8_PMUV3_PERFCTR_CPU_CYCLES) &&
		    !armv8pmu_has_long_event(armpmu))
			return -EINVAL;
	}
	
> +	/* Userspace counter access only enabled if requested and a per task event */
> +	if (sysctl_perf_user_access && armv8pmu_event_want_user_access(event) &&
> +	    (event->attach_state & PERF_ATTACH_TASK))
> +		event->hw.flags |= PERF_EVENT_FLAG_USER_READ_CNT;

Can we please explicitly reject !PERF_ATTACH_TASK case?

If the user requested something we don't intend to support, I'd rather
return -EINVAL here, rather than continue on.
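
i.e. one possible shape of that check (sketch only, reusing the names
from the patch):

	/* Userspace access only for requested, per-task events */
	if (sysctl_perf_user_access && armv8pmu_event_want_user_access(event)) {
		if (!(event->attach_state & PERF_ATTACH_TASK))
			return -EINVAL;
		event->hw.flags |= PERF_EVENT_FLAG_USER_READ_CNT;
	}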

Thanks,
Mark.

> +
>  	/* Only expose micro/arch events supported by this PMU */
>  	if ((hw_event_id > 0) && (hw_event_id < ARMV8_PMUV3_MAX_COMMON_EVENTS)
>  	    && test_bit(hw_event_id, armpmu->pmceid_bitmap)) {
> @@ -1106,6 +1175,11 @@ static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
>  	return probe.present ? 0 : -ENODEV;
>  }
>  
> +static void armv8pmu_disable_user_access_ipi(void *unused)
> +{
> +	armv8pmu_disable_user_access();
> +}
> +
>  int armv8pmu_proc_user_access_handler(struct ctl_table *table, int write,
>                  void *buffer, size_t *lenp, loff_t *ppos)
>  {
> @@ -1113,6 +1187,7 @@ int armv8pmu_proc_user_access_handler(struct ctl_table *table, int write,
>  	if (ret || !write || sysctl_perf_user_access)
>  		return ret;
>  
> +	on_each_cpu(armv8pmu_disable_user_access_ipi, NULL, 1);
>  	return 0;
>  }
>  
> @@ -1152,6 +1227,8 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char *name,
>  	cpu_pmu->set_event_filter	= armv8pmu_set_event_filter;
>  	cpu_pmu->filter_match		= armv8pmu_filter_match;
>  
> +	cpu_pmu->pmu.event_idx		= armv8pmu_access_event_idx;
> +
>  	cpu_pmu->name			= name;
>  	cpu_pmu->map_event		= map_event;
>  	cpu_pmu->attr_groups[ARMPMU_ATTR_GROUP_EVENTS] = events ?
> @@ -1328,6 +1405,14 @@ void arch_perf_update_userpage(struct perf_event *event,
>  	userpg->cap_user_time = 0;
>  	userpg->cap_user_time_zero = 0;
>  	userpg->cap_user_time_short = 0;
> +	userpg->cap_user_rdpmc = !!(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT);
> +
> +	if (userpg->cap_user_rdpmc) {
> +		if (event->hw.flags & ARMPMU_EVT_64BIT)
> +			userpg->pmc_width = 64;
> +		else
> +			userpg->pmc_width = 32;
> +	}
>  
>  	do {
>  		rd = sched_clock_read_begin(&seq);
> -- 
> 2.30.2
> 

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH v10 4/5] arm64: perf: Enable PMU counter userspace access for perf event
  2021-10-14 16:58   ` Mark Rutland
@ 2021-10-14 19:24     ` Rob Herring
  2021-10-19 15:28       ` Mark Rutland
  2021-10-15 15:53     ` Rob Herring
  1 sibling, 1 reply; 9+ messages in thread
From: Rob Herring @ 2021-10-14 19:24 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Will Deacon, Peter Zijlstra, Ingo Molnar, Catalin Marinas,
	Arnaldo Carvalho de Melo, Jiri Olsa, Kan Liang, Ian Rogers,
	Alexander Shishkin, Honnappa Nagarahalli, Zachary.Leaf,
	Raphael Gault, Jonathan Cameron, Namhyung Kim, Itaru Kitayama,
	Vince Weaver, linux-arm-kernel, linux-kernel@vger.kernel.org,
	linux-perf-users

On Thu, Oct 14, 2021 at 11:58 AM Mark Rutland <mark.rutland@arm.com> wrote:
>
> Hi Rob,
>
> This looks pretty good!
>
> I have one largish query below, and otherwise only trivialities that I'm
> happy to fix up.
>
> On Tue, Sep 14, 2021 at 03:47:59PM -0500, Rob Herring wrote:
> > Arm PMUs can support direct userspace access of counters which allows for
> > low overhead (i.e. no syscall) self-monitoring of tasks. The same feature
> > exists on x86 called 'rdpmc'. Unlike x86, userspace access will only be
> > enabled for thread bound events. This could be extended if needed, but
> > simplifies the implementation and reduces the chances for any
> > information leaks (which the x86 implementation suffers from).
> >
> > PMU EL0 access will be enabled when an event with userspace access is
> > part of the thread's context. This includes when the event is not
> > scheduled on the PMU. There's some additional overhead clearing
> > dirty counters when access is enabled in order to prevent leaking
> > disabled counter data from other tasks.
> >
> > Unlike x86, enabling of userspace access must be requested with a new
> > attr bit: config1:1. If the user requests userspace access and 64-bit
> > counters, then chaining will be disabled and the user will get the
> > maximum size counter the underlying h/w can support. The modes for
> > config1 are as follows:
> >
> > config1 = 0 : user access disabled and always 32-bit
> > config1 = 1 : user access disabled and always 64-bit (using chaining if needed)
> > config1 = 2 : user access enabled and always 32-bit
> > config1 = 3 : user access enabled and counter size matches underlying counter.
>
> We probably need to note somewhere (i.e. in the next patch) that we mean
> *logically* 32-bit, and this could be a biased 64-bit counter, so
> userspace needs to treat the upper 32-bits of counters as UNKNOWN.

Okay, though this detail doesn't matter if the user uses the correct
read loop (now in libperf).
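
(For reference, the loop in question follows the documented
perf_event_mmap_page pattern, roughly as below; read_pmc() is just a
stand-in for the arch-specific counter read, not an actual libperf
symbol.)

#include <linux/perf_event.h>

/* Arch-specific raw counter read (e.g. rdpmc on x86, mrs on arm64). */
extern __u64 read_pmc(__u32 idx);

static __u64 mmap_read_self(volatile struct perf_event_mmap_page *pc)
{
	__u32 seq, idx;
	__u64 count, pmc, width;

	do {
		seq = pc->lock;
		__sync_synchronize();	/* full barrier; rmb() would do */

		idx = pc->index;
		count = pc->offset;	/* if !cap_user_rdpmc, use read(2) */
		if (pc->cap_user_rdpmc && idx) {
			width = pc->pmc_width;
			pmc = read_pmc(idx - 1);
			pmc <<= 64 - width;
			pmc >>= 64 - width;	/* strip UNKNOWN upper bits */
			count += pmc;
		}

		__sync_synchronize();
	} while (pc->lock != seq);

	return count;
}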

> For the `config1 = 3` case (potentially) overriding the usual long
> semantic, I'm struggling to understand why we need that rather than
> forcing the use of a 64-bit counter, because in that case:
>
> * For a CPU_CYCLES event:
>   __armv8_pmuv3_map_event() will always pick 64-bits
>   get_event_idx() may fail to allocate a 64-bit counter.
>
> * For other events:
>   __armv8_pmuv3_map_event() will pick 32/64 based on long counter
>   support
>   get_event_idx() will only fail if there are no counters free.
>
> Whereas if __armv8_pmuv3_map_event() returned an error for the latter
> when long counter support is not implemented, we'd have consistent
> `long` semantics, and the CPU_CYCLES behaviour would be identical.
>
> What's the rationale for `3` leaving the choice to the kernel?

It's the 'give me the maximum sized counter the h/w can support' choice.
That's easier for userspace to implement. Bit 1 is more of a hint that
the user wants userspace access rather than a requirement.

> If the problem is discoverability, I'd be happy to add something to
> sysfs to describe whether the PMU has long event support.

Checking sysfs, or trying for 64-bit support and then falling back to
32-bit support, isn't much different.

Keep in mind that x86 always succeeds here. Every userspace user will
have to add whatever dance we create here. For example, each libperf
test with user access (there's only 2 in my tree, but there's a series
adding more) has to have an '#ifdef __aarch64__' for whatever we do
here. I was seeking to minimize that. Right now, that's just setting
config1 to 0x3. Also, note that libperf will opportunistically use a
userspace read instead of read(). The user just has to mmap the event
and libperf will use a userspace read when enabled which ultimately
depends on what the mmapped page says.
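
For comparison, the try-then-fall-back dance being discussed would look
roughly like this (sketch only, names illustrative; whether the first
open can fail depends on which semantics we settle on):

#include <errno.h>
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>

static int open_user_counter(struct perf_event_attr *attr)
{
	int fd;

	attr->config1 = 0x3;	/* user access + 64-bit counter */
	fd = syscall(__NR_perf_event_open, attr, 0, -1, -1, 0);
	if (fd < 0 && errno == EINVAL) {
		attr->config1 = 0x2;	/* user access, 32-bit counter */
		fd = syscall(__NR_perf_event_open, attr, 0, -1, -1, 0);
	}
	return fd;
}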

[...]

> > @@ -995,9 +1050,23 @@ static int __armv8_pmuv3_map_event(struct perf_event *event,
> >                                      &armv8_pmuv3_perf_cache_map,
> >                                      ARMV8_PMU_EVTYPE_EVENT);
> >
> > -     if (armv8pmu_event_is_64bit(event))
> > +     /*
> > +      * At this point, the counter is not assigned. If a 64-bit counter is
> > +      * requested, we must make sure the h/w has 64-bit counters if we set
> > +      * the event size to 64-bit because chaining is not supported with
> > +      * userspace access. This may still fail later on if the CPU cycle
> > +      * counter is in use.
> > +      */
> > +     if (armv8pmu_event_is_64bit(event) &&
> > +         (!armv8pmu_event_want_user_access(event) ||
> > +          armv8pmu_has_long_event(armpmu) || (hw_event_id == ARMV8_PMUV3_PERFCTR_CPU_CYCLES)))
> >               event->hw.flags |= ARMPMU_EVT_64BIT;
>
> If we can follow my suggestion in reply to the cover text, we can make
> this:
>
>         if (armv8pmu_event_is_64bit(event))
>                 event->hw.flags |= ARMPMU_EVT_64BIT;
>
>         /*
>          * User events must be allocated into a single counter, and so
>          * must not be chained.
>          *
>          * Most 64-bit events require long counter support, but 64-bit
>          * CPU_CYCLES events can be placed into the dedicated cycle
>          * counter when this is free.
>          */
>         if (armv8pmu_event_want_user_access(event)) {
>                 if (armv8pmu_event_is_64bit(event) &&
>                     (hw_event_id != ARMV8_PMUV3_PERFCTR_CPU_CYCLES) &&
>                     !armv8pmu_has_long_event(armpmu))
>                         return -EINVAL;
>         }
>
> > +     /* Userspace counter access only enabled if requested and a per task event */
> > +     if (sysctl_perf_user_access && armv8pmu_event_want_user_access(event) &&
> > +         (event->attach_state & PERF_ATTACH_TASK))
> > +             event->hw.flags |= PERF_EVENT_FLAG_USER_READ_CNT;
>
> Can we please explicitly reject !PERF_ATTACH_TASK case?
>
> If the user requested something we don't intend to support, I'd rather
> return -EINVAL here, rather than continue on.

This is similar to the 64-bit case, though I'm somewhat less concerned
here given that per-cpu events aren't too useful in this case and the
setup is a bit different already.

Rob

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH v10 4/5] arm64: perf: Enable PMU counter userspace access for perf event
  2021-10-14 16:58   ` Mark Rutland
  2021-10-14 19:24     ` Rob Herring
@ 2021-10-15 15:53     ` Rob Herring
  1 sibling, 0 replies; 9+ messages in thread
From: Rob Herring @ 2021-10-15 15:53 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Will Deacon, Peter Zijlstra, Ingo Molnar, Catalin Marinas,
	Arnaldo Carvalho de Melo, Jiri Olsa, Kan Liang, Ian Rogers,
	Alexander Shishkin, Honnappa Nagarahalli, Zachary.Leaf,
	Raphael Gault, Jonathan Cameron, Namhyung Kim, Itaru Kitayama,
	Vince Weaver, linux-arm-kernel, linux-kernel@vger.kernel.org,
	linux-perf-users

On Thu, Oct 14, 2021 at 11:58 AM Mark Rutland <mark.rutland@arm.com> wrote:
>
> Hi Rob,
>
> This looks pretty good!
>
> I have one largish query below, and otherwise only trivialities that I'm
> happy to fix up.
>
> On Tue, Sep 14, 2021 at 03:47:59PM -0500, Rob Herring wrote:

[...]

> >  static inline bool armv8pmu_event_is_chained(struct perf_event *event)
> >  {
> >       int idx = event->hw.idx;
> >       struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
> >
> > -     return !WARN_ON(idx < 0) &&
> > +     return !(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT) &&
> >              armv8pmu_event_is_64bit(event) &&
> >              !armv8pmu_has_long_event(cpu_pmu) &&
> >              (idx != ARMV8_IDX_CYCLE_COUNTER);
> > @@ -720,6 +726,27 @@ static inline u32 armv8pmu_getreset_flags(void)
> >       return value;
>
> Above this, could we please add:
>
> | static inline bool armv8pmu_event_has_user_read(struct perf_event *event)
> | {
> |       return event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT;
> | }
>
> ... and use that where we look at PERF_EVENT_FLAG_USER_READ_CNT?

Sure, but as this is a common flag now, I should probably make that a
common function in linux/perf_event.h and have x86 code use it too.
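
i.e. roughly (sketch only; the helper name is illustrative):

static inline bool event_has_user_read_cnt(struct perf_event *event)
{
	return event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT;
}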

Rob

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH v10 4/5] arm64: perf: Enable PMU counter userspace access for perf event
  2021-10-14 19:24     ` Rob Herring
@ 2021-10-19 15:28       ` Mark Rutland
  0 siblings, 0 replies; 9+ messages in thread
From: Mark Rutland @ 2021-10-19 15:28 UTC (permalink / raw)
  To: Rob Herring
  Cc: Will Deacon, Peter Zijlstra, Ingo Molnar, Catalin Marinas,
	Arnaldo Carvalho de Melo, Jiri Olsa, Kan Liang, Ian Rogers,
	Alexander Shishkin, Honnappa Nagarahalli, Zachary.Leaf,
	Raphael Gault, Jonathan Cameron, Namhyung Kim, Itaru Kitayama,
	Vince Weaver, linux-arm-kernel, linux-kernel@vger.kernel.org,
	linux-perf-users

Hi Rob,

On Thu, Oct 14, 2021 at 02:24:46PM -0500, Rob Herring wrote:
> On Thu, Oct 14, 2021 at 11:58 AM Mark Rutland <mark.rutland@arm.com> wrote:
> > On Tue, Sep 14, 2021 at 03:47:59PM -0500, Rob Herring wrote:
> > For the `config1 = 3` case (potentially) overriding the usual long
> > semantic, I'm struggling to understand why we need that rather than
> > forcing the use of a 64-bit counter, because in that case:
> >
> > * For a CPU_CYCLES event:
> >   __armv8_pmuv3_map_event() will always pick 64-bits
> >   get_event_idx() may fail to allocate a 64-bit counter.
> >
> > * For other events:
> >   __armv8_pmuv3_map_event() will pick 32/64 based on long counter
> >   support
> >   get_event_idx() will only fail if there are no counters free.
> >
> > Whereas if __armv8_pmuv3_map_event() returned an error for the latter
> > when long counter support is not implemented, we'd have consistent
> > `long` semantics, and the CPU_CYCLES behaviour would be identical.
> >
> > What's the rationale for `3` leaving the choice to the kernel?
> 
> It's the 'give me the maximum sized counter the h/w can support' choice.
> That's easier for userspace to implement. Bit 1 is more of a hint that
> the user wants userspace access rather than a requirement.
> 
> > If the problem is discoverability, I'd be happy to add something to
> > sysfs to describe whether the PMU has long event support.
> 
> Checking sysfs, or trying for 64-bit support and then falling back to
> 32-bit support, isn't much different.
> 
> Keep in mind that x86 always succeeds here. Every userspace user will
> have to add whatever dance we create here. For example, each libperf
> test with user access (there's only 2 in my tree, but there's a series
> adding more) has to have an '#ifdef __aarch64__' for whatever we do
> here. I was seeking to minimize that. Right now, that's just a set
> config1 to 0x3. Also, note that libperf will opportunistically use a
> userspace read instead of read(). The user just has to mmap the event
> and libperf will use a userspace read when enabled which ultimately
> depends on what the mmapped page says.

I think that x86 always succeeding here is more of a legacy thing that
they're stuck with rather than a design to be copied.

I'd prefer to keep the existing meaning of the `long` flag to mean "give
me 64 bits of counter, somehow", with `rdpmc` meaning "give me a single
counter I can access from userspace", even if that means the combination
of the two can sometimes be rejected. As you say, we can probe for that
as necessary by trying `long` then falling back to a plain event, and if
that ends up being a bottleneck somehow we can figure out a way of
advertising support to userspace. Regardless, we should 

Importantly, I don't think libperf should override a user's request for
`long`, since the user may want to optimize for minimal perturbation
rather than faster access.

If we want a "please give me the longest counter that's compatible with
other constraints", I think that should be a new flag e.g. `trylong`,
and shouldn't override the existing `long`. We can add that as a
follow-up if we want it.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2021-10-19 15:28 UTC | newest]

Thread overview: 9+ messages
     [not found] <20210914204800.3945732-1-robh@kernel.org>
2021-09-14 20:47 ` [PATCH v10 1/5] x86: perf: Move RDPMC event flag to a common definition Rob Herring
2021-10-13 17:25   ` Mark Rutland
2021-09-14 20:47 ` [PATCH v10 3/5] arm64: perf: Add userspace counter access disable switch Rob Herring
2021-10-13 17:30   ` Mark Rutland
2021-09-14 20:47 ` [PATCH v10 4/5] arm64: perf: Enable PMU counter userspace access for perf event Rob Herring
2021-10-14 16:58   ` Mark Rutland
2021-10-14 19:24     ` Rob Herring
2021-10-19 15:28       ` Mark Rutland
2021-10-15 15:53     ` Rob Herring
