linux-perf-users.vger.kernel.org archive mirror
* [PATCH v3 0/3] perf: Add PERF_EVENT_IOC_INC_EVENT_LIMIT
@ 2025-01-07  4:07 Charlie Jenkins
  2025-01-07  4:07 ` [PATCH v3 1/3] " Charlie Jenkins
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Charlie Jenkins @ 2025-01-07  4:07 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
	Ian Rogers, Adrian Hunter, Atish Patra
  Cc: linux-perf-users, linux-kernel, Charlie Jenkins,
	Shunsuke Nakamura

Introduce a new perf ioctl command, PERF_EVENT_IOC_INC_EVENT_LIMIT, that
behaves the same as PERF_EVENT_IOC_REFRESH except that it does not
immediately enable the event.

Also add a libperf API, perf_evsel__refresh(), to give libperf users
access to the new ioctl.
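
As an illustration of the intended flow, here is a minimal sketch (not part
of the series; the helper and attribute values below are placeholders) of
arming the limit on a child that is parked, e.g. on a pipe, until the event
has been set up and the child is allowed to exec:

  #include <sys/types.h>
  #include <sys/ioctl.h>
  #include <sys/syscall.h>
  #include <unistd.h>
  #include <linux/perf_event.h>

  static long sys_perf_event_open(struct perf_event_attr *attr, pid_t pid,
                                  int cpu, int group_fd, unsigned long flags)
  {
          return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
  }

  /* Arm 'nr' overflows on 'child' without enabling the event by hand. */
  static int arm_child_counter(pid_t child, long nr)
  {
          struct perf_event_attr attr = {
                  .size           = sizeof(attr),
                  .type           = PERF_TYPE_HARDWARE,
                  .config         = PERF_COUNT_HW_INSTRUCTIONS,
                  .sample_period  = 1000000,   /* placeholder period */
                  .disabled       = 1,         /* do not count yet */
                  .enable_on_exec = 1,         /* kernel enables at execve() */
          };
          int fd = sys_perf_event_open(&attr, child, -1, -1, 0);

          if (fd < 0)
                  return -1;

          /* Bump event_limit; enabling is left to enable_on_exec. */
          if (ioctl(fd, PERF_EVENT_IOC_INC_EVENT_LIMIT, nr) < 0) {
                  close(fd);
                  return -1;
          }

          return fd;
  }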

Signed-off-by: Charlie Jenkins <charlie@rivosinc.com>
---
This series will conflict with another series I sent [1]. The final patch
of this series changes perf_evsel__ioctl() to accept an unsigned long
instead of a void *. My preference would be for the following patch to be
squashed onto "libperf: Add perf_evsel__refresh() function" when applied:

From 66ab7b57c8b5a94c02c8d82204338b0ebca48bc5 Mon Sep 17 00:00:00 2001
From: Charlie Jenkins <charlie@rivosinc.com>
Date: Mon, 6 Jan 2025 20:00:28 -0800
Subject: [PATCH] libperf: Fixup perf_evsel__get_id

This patch should be squashed onto
"libperf: Add perf_evsel__refresh() function" or
"libperf: Add perf_evsel__id() function" when merging.

Signed-off-by: Charlie Jenkins <charlie@rivosinc.com>
---
 tools/lib/perf/evsel.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/lib/perf/evsel.c b/tools/lib/perf/evsel.c
index 1cd1680d28d7..8690588c0ba1 100644
--- a/tools/lib/perf/evsel.c
+++ b/tools/lib/perf/evsel.c
@@ -521,7 +521,7 @@ int perf_evsel__period(struct perf_evsel *evsel, u64 period)
 
 static int perf_evsel__get_id(struct perf_evsel *evsel, int cpu_map_idx, int thread, u64 *id)
 {
-	return perf_evsel__ioctl(evsel, PERF_EVENT_IOC_ID, id, cpu_map_idx, thread);
+	return perf_evsel__ioctl(evsel, PERF_EVENT_IOC_ID, (unsigned long)id, cpu_map_idx, thread);
 }
 
 int perf_evsel__id(struct perf_evsel *evsel, u64 *ids[])
-- 
2.34.1

[1] https://lore.kernel.org/lkml/20250106-perf_evsel_get_id-v3-1-44eca9194f1e@rivosinc.com/T/#u

Changes in v3:
- Use uint64_t instead of __u64 for consistency
- Link to v2: https://lore.kernel.org/r/20240807-perf_set_event_limit-v2-0-823b78d04c76@rivosinc.com

Changes in v2:
- Drop discussion about signal race condition
- Add new patch "libperf: Add perf_evsel__refresh() function"
- This newly added patch was pulled from a different series with
  modifications to fit the new ioctl key
- https://lore.kernel.org/lkml/20240726-overflow_check_libperf-v2-0-7d154dcf6bea@rivosinc.com/
  will be updated
- Link to v1: https://lore.kernel.org/r/20240724-perf_set_event_limit-v1-0-e680c93eca55@rivosinc.com

---
Charlie Jenkins (3):
      perf: Add PERF_EVENT_IOC_INC_EVENT_LIMIT
      perf: Document PERF_EVENT_IOC_INC_EVENT_LIMIT
      libperf: Add perf_evsel__refresh() function

 include/linux/perf_event.h               |  4 +--
 include/uapi/linux/perf_event.h          |  1 +
 kernel/events/core.c                     | 15 +++++++---
 tools/include/uapi/linux/perf_event.h    |  1 +
 tools/lib/perf/Documentation/libperf.txt |  2 ++
 tools/lib/perf/evsel.c                   | 49 ++++++++++++++++++++++++++------
 tools/lib/perf/include/perf/evsel.h      |  2 ++
 tools/lib/perf/libperf.map               |  2 ++
 tools/perf/design.txt                    |  5 ++++
 9 files changed, 66 insertions(+), 15 deletions(-)
---
base-commit: ed60738a9b7ede4a4ae797d90be7fde3e10a36c7
change-id: 20240724-perf_set_event_limit-079f1b996376
-- 
- Charlie



* [PATCH v3 1/3] perf: Add PERF_EVENT_IOC_INC_EVENT_LIMIT
  2025-01-07  4:07 [PATCH v3 0/3] perf: Add PERF_EVENT_IOC_INC_EVENT_LIMIT Charlie Jenkins
@ 2025-01-07  4:07 ` Charlie Jenkins
  2025-01-13 13:18   ` Peter Zijlstra
  2025-01-07  4:07 ` [PATCH v3 2/3] perf: Document PERF_EVENT_IOC_INC_EVENT_LIMIT Charlie Jenkins
  2025-01-07  4:07 ` [PATCH v3 3/3] libperf: Add perf_evsel__refresh() function Charlie Jenkins
  2 siblings, 1 reply; 6+ messages in thread
From: Charlie Jenkins @ 2025-01-07  4:07 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
	Ian Rogers, Adrian Hunter, Atish Patra
  Cc: linux-perf-users, linux-kernel, Charlie Jenkins

PERF_EVENT_IOC_REFRESH immediately enables the event after incrementing
event_limit. Provide a new ioctl command that allows programs to increment
event_limit without enabling the event. A use case for this is to set an
event_limit in combination with enable_on_exec.

Signed-off-by: Charlie Jenkins <charlie@rivosinc.com>
---
 include/linux/perf_event.h            |  4 ++--
 include/uapi/linux/perf_event.h       |  1 +
 kernel/events/core.c                  | 15 ++++++++++-----
 tools/include/uapi/linux/perf_event.h |  1 +
 4 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index cb99ec8c9e96f63c64eeeb4470c019d86bc6e50f..9b407d51818f59e1548f8936ec0bb185fc4d9f0c 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1147,7 +1147,7 @@ extern int perf_event_task_enable(void);
 
 extern void perf_pmu_resched(struct pmu *pmu);
 
-extern int perf_event_refresh(struct perf_event *event, int refresh);
+extern int perf_event_refresh(struct perf_event *event, int refresh, bool enable);
 extern void perf_event_update_userpage(struct perf_event *event);
 extern int perf_event_release_kernel(struct perf_event *event);
 extern struct perf_event *
@@ -1835,7 +1835,7 @@ static inline int perf_event_read_local(struct perf_event *event, u64 *value,
 static inline void perf_event_print_debug(void)				{ }
 static inline int perf_event_task_disable(void)				{ return -EINVAL; }
 static inline int perf_event_task_enable(void)				{ return -EINVAL; }
-static inline int perf_event_refresh(struct perf_event *event, int refresh)
+static inline int perf_event_refresh(struct perf_event *event, int refresh, bool enable)
 {
 	return -EINVAL;
 }
diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index 0524d541d4e3d50150da03186467382bc60bdf50..5eeccd57078c95f08f6ac401b1e38c2e84e86d9a 100644
--- a/include/uapi/linux/perf_event.h
+++ b/include/uapi/linux/perf_event.h
@@ -569,6 +569,7 @@ struct perf_event_query_bpf {
 #define PERF_EVENT_IOC_PAUSE_OUTPUT		_IOW('$', 9, __u32)
 #define PERF_EVENT_IOC_QUERY_BPF		_IOWR('$', 10, struct perf_event_query_bpf *)
 #define PERF_EVENT_IOC_MODIFY_ATTRIBUTES	_IOW('$', 11, struct perf_event_attr *)
+#define PERF_EVENT_IOC_INC_EVENT_LIMIT		_IO ('$', 12)
 
 enum perf_event_ioc_flags {
 	PERF_IOC_FLAG_GROUP		= 1U << 0,
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 065f9188b44a0d8ee66cc76314ae247dbe45cb57..6514065513ba01e99c88854f3947a114562e81ad 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3210,7 +3210,7 @@ void perf_event_addr_filters_sync(struct perf_event *event)
 }
 EXPORT_SYMBOL_GPL(perf_event_addr_filters_sync);
 
-static int _perf_event_refresh(struct perf_event *event, int refresh)
+static int _perf_event_refresh(struct perf_event *event, int refresh, bool enable)
 {
 	/*
 	 * not supported on inherited events
@@ -3219,7 +3219,8 @@ static int _perf_event_refresh(struct perf_event *event, int refresh)
 		return -EINVAL;
 
 	atomic_add(refresh, &event->event_limit);
-	_perf_event_enable(event);
+	if (enable)
+		_perf_event_enable(event);
 
 	return 0;
 }
@@ -3227,13 +3228,13 @@ static int _perf_event_refresh(struct perf_event *event, int refresh)
 /*
  * See perf_event_disable()
  */
-int perf_event_refresh(struct perf_event *event, int refresh)
+int perf_event_refresh(struct perf_event *event, int refresh, bool enable)
 {
 	struct perf_event_context *ctx;
 	int ret;
 
 	ctx = perf_event_ctx_lock(event);
-	ret = _perf_event_refresh(event, refresh);
+	ret = _perf_event_refresh(event, refresh, enable);
 	perf_event_ctx_unlock(event, ctx);
 
 	return ret;
@@ -6019,7 +6020,7 @@ static long _perf_ioctl(struct perf_event *event, unsigned int cmd, unsigned lon
 		break;
 
 	case PERF_EVENT_IOC_REFRESH:
-		return _perf_event_refresh(event, arg);
+		return _perf_event_refresh(event, arg, true);
 
 	case PERF_EVENT_IOC_PERIOD:
 	{
@@ -6099,6 +6100,10 @@ static long _perf_ioctl(struct perf_event *event, unsigned int cmd, unsigned lon
 
 		return perf_event_modify_attr(event,  &new_attr);
 	}
+
+	case PERF_EVENT_IOC_INC_EVENT_LIMIT:
+		return _perf_event_refresh(event, arg, false);
+
 	default:
 		return -ENOTTY;
 	}
diff --git a/tools/include/uapi/linux/perf_event.h b/tools/include/uapi/linux/perf_event.h
index 0524d541d4e3d50150da03186467382bc60bdf50..5eeccd57078c95f08f6ac401b1e38c2e84e86d9a 100644
--- a/tools/include/uapi/linux/perf_event.h
+++ b/tools/include/uapi/linux/perf_event.h
@@ -569,6 +569,7 @@ struct perf_event_query_bpf {
 #define PERF_EVENT_IOC_PAUSE_OUTPUT		_IOW('$', 9, __u32)
 #define PERF_EVENT_IOC_QUERY_BPF		_IOWR('$', 10, struct perf_event_query_bpf *)
 #define PERF_EVENT_IOC_MODIFY_ATTRIBUTES	_IOW('$', 11, struct perf_event_attr *)
+#define PERF_EVENT_IOC_INC_EVENT_LIMIT		_IO ('$', 12)
 
 enum perf_event_ioc_flags {
 	PERF_IOC_FLAG_GROUP		= 1U << 0,

-- 
2.34.1



* [PATCH v3 2/3] perf: Document PERF_EVENT_IOC_INC_EVENT_LIMIT
  2025-01-07  4:07 [PATCH v3 0/3] perf: Add PERF_EVENT_IOC_INC_EVENT_LIMIT Charlie Jenkins
  2025-01-07  4:07 ` [PATCH v3 1/3] " Charlie Jenkins
@ 2025-01-07  4:07 ` Charlie Jenkins
  2025-01-07  4:07 ` [PATCH v3 3/3] libperf: Add perf_evsel__refresh() function Charlie Jenkins
  2 siblings, 0 replies; 6+ messages in thread
From: Charlie Jenkins @ 2025-01-07  4:07 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
	Ian Rogers, Adrian Hunter, Atish Patra
  Cc: linux-perf-users, linux-kernel, Charlie Jenkins

Document PERF_EVENT_IOC_INC_EVENT_LIMIT in tools/perf/design.txt and
explain how it differs from PERF_EVENT_IOC_REFRESH.

Signed-off-by: Charlie Jenkins <charlie@rivosinc.com>
---
 tools/perf/design.txt | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/tools/perf/design.txt b/tools/perf/design.txt
index aa8cfeabb7432d4011005f34a981fa3381caab09..1626ae83785a5773c255cc5219c2e9e0525dda19 100644
--- a/tools/perf/design.txt
+++ b/tools/perf/design.txt
@@ -439,6 +439,11 @@ Additionally, non-inherited overflow counters can use
 
 to enable a counter for 'nr' events, after which it gets disabled again.
 
+PERF_EVENT_IOC_REFRESH will increment the event limit by 'nr' and enable the
+event. To increment the event limit without enabling it, use the following:
+
+	ioctl(fd, PERF_EVENT_IOC_INC_EVENT_LIMIT, nr);
+
 A process can enable or disable all the counter groups that are
 attached to it, using prctl:
 

-- 
2.34.1



* [PATCH v3 3/3] libperf: Add perf_evsel__refresh() function
  2025-01-07  4:07 [PATCH v3 0/3] perf: Add PERF_EVENT_IOC_INC_EVENT_LIMIT Charlie Jenkins
  2025-01-07  4:07 ` [PATCH v3 1/3] " Charlie Jenkins
  2025-01-07  4:07 ` [PATCH v3 2/3] perf: Document PERF_EVENT_IOC_INC_EVENT_LIMIT Charlie Jenkins
@ 2025-01-07  4:07 ` Charlie Jenkins
  2 siblings, 0 replies; 6+ messages in thread
From: Charlie Jenkins @ 2025-01-07  4:07 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
	Ian Rogers, Adrian Hunter, Atish Patra
  Cc: linux-perf-users, linux-kernel, Charlie Jenkins,
	Shunsuke Nakamura

Introduce perf_evsel__refresh() to increment the overflow limit
(event_limit); it can optionally enable the event immediately. Also add
perf_evsel__period() to update an event's sample period.

Co-developed-by: Shunsuke Nakamura <nakamura.shun@fujitsu.com>
Signed-off-by: Shunsuke Nakamura <nakamura.shun@fujitsu.com>
Signed-off-by: Charlie Jenkins <charlie@rivosinc.com>
---
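As a rough usage sketch (not part of the patch; the helper below is just an
example), the two functions added here could be combined like this, assuming
an already opened evsel:

  #include <stdbool.h>
  #include <stdint.h>
  #include <perf/evsel.h>

  /*
   * Re-arm an event: update the period, then raise the overflow limit by
   * 'nr' without enabling it; enable == false uses the new
   * PERF_EVENT_IOC_INC_EVENT_LIMIT ioctl.
   */
  static int rearm(struct perf_evsel *evsel, uint64_t period, int nr)
  {
          int err = perf_evsel__period(evsel, period);

          if (err)
                  return err;
          return perf_evsel__refresh(evsel, nr, false);
  }
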
 tools/lib/perf/Documentation/libperf.txt |  2 ++
 tools/lib/perf/evsel.c                   | 49 ++++++++++++++++++++++++++------
 tools/lib/perf/include/perf/evsel.h      |  2 ++
 tools/lib/perf/libperf.map               |  2 ++
 4 files changed, 47 insertions(+), 8 deletions(-)

diff --git a/tools/lib/perf/Documentation/libperf.txt b/tools/lib/perf/Documentation/libperf.txt
index 59aabdd3cabff19c9a4835d9d20a74c6087d9a06..d557cb0279853c82dfc65533ba230255fced64bc 100644
--- a/tools/lib/perf/Documentation/libperf.txt
+++ b/tools/lib/perf/Documentation/libperf.txt
@@ -146,6 +146,8 @@ SYNOPSIS
   int perf_evsel__enable_cpu(struct perf_evsel *evsel, int cpu_map_idx);
   int perf_evsel__disable(struct perf_evsel *evsel);
   int perf_evsel__disable_cpu(struct perf_evsel *evsel, int cpu_map_idx);
+  int perf_evsel__refresh(struct perf_evsel *evsel, int refresh, bool enable);
+  int perf_evsel__period(struct perf_evsel *evsel, uint64_t period);
   struct perf_cpu_map *perf_evsel__cpus(struct perf_evsel *evsel);
   struct perf_thread_map *perf_evsel__threads(struct perf_evsel *evsel);
   struct perf_event_attr *perf_evsel__attr(struct perf_evsel *evsel);
diff --git a/tools/lib/perf/evsel.c b/tools/lib/perf/evsel.c
index c475319e2e410d31d072d81afb9e6277b16ef1f1..92633d426c9439afd7a8cde70dde8320d6a2efa3 100644
--- a/tools/lib/perf/evsel.c
+++ b/tools/lib/perf/evsel.c
@@ -19,6 +19,7 @@
 #include <sys/ioctl.h>
 #include <sys/mman.h>
 #include <asm/bug.h>
+#include "internal.h"
 
 void perf_evsel__init(struct perf_evsel *evsel, struct perf_event_attr *attr,
 		      int idx)
@@ -414,7 +415,7 @@ int perf_evsel__read(struct perf_evsel *evsel, int cpu_map_idx, int thread,
 	return 0;
 }
 
-static int perf_evsel__ioctl(struct perf_evsel *evsel, int ioc, void *arg,
+static int perf_evsel__ioctl(struct perf_evsel *evsel, int ioc, unsigned long arg,
 			     int cpu_map_idx, int thread)
 {
 	int *fd = FD(evsel, cpu_map_idx, thread);
@@ -426,7 +427,7 @@ static int perf_evsel__ioctl(struct perf_evsel *evsel, int ioc, void *arg,
 }
 
 static int perf_evsel__run_ioctl(struct perf_evsel *evsel,
-				 int ioc,  void *arg,
+				 int ioc, unsigned long arg,
 				 int cpu_map_idx)
 {
 	int thread;
@@ -443,7 +444,7 @@ static int perf_evsel__run_ioctl(struct perf_evsel *evsel,
 
 int perf_evsel__enable_cpu(struct perf_evsel *evsel, int cpu_map_idx)
 {
-	return perf_evsel__run_ioctl(evsel, PERF_EVENT_IOC_ENABLE, NULL, cpu_map_idx);
+	return perf_evsel__run_ioctl(evsel, PERF_EVENT_IOC_ENABLE, 0, cpu_map_idx);
 }
 
 int perf_evsel__enable_thread(struct perf_evsel *evsel, int thread)
@@ -453,7 +454,7 @@ int perf_evsel__enable_thread(struct perf_evsel *evsel, int thread)
 	int err;
 
 	perf_cpu_map__for_each_cpu(cpu, idx, evsel->cpus) {
-		err = perf_evsel__ioctl(evsel, PERF_EVENT_IOC_ENABLE, NULL, idx, thread);
+		err = perf_evsel__ioctl(evsel, PERF_EVENT_IOC_ENABLE, 0, idx, thread);
 		if (err)
 			return err;
 	}
@@ -467,13 +468,13 @@ int perf_evsel__enable(struct perf_evsel *evsel)
 	int err = 0;
 
 	for (i = 0; i < xyarray__max_x(evsel->fd) && !err; i++)
-		err = perf_evsel__run_ioctl(evsel, PERF_EVENT_IOC_ENABLE, NULL, i);
+		err = perf_evsel__run_ioctl(evsel, PERF_EVENT_IOC_ENABLE, 0, i);
 	return err;
 }
 
 int perf_evsel__disable_cpu(struct perf_evsel *evsel, int cpu_map_idx)
 {
-	return perf_evsel__run_ioctl(evsel, PERF_EVENT_IOC_DISABLE, NULL, cpu_map_idx);
+	return perf_evsel__run_ioctl(evsel, PERF_EVENT_IOC_DISABLE, 0, cpu_map_idx);
 }
 
 int perf_evsel__disable(struct perf_evsel *evsel)
@@ -482,7 +483,39 @@ int perf_evsel__disable(struct perf_evsel *evsel)
 	int err = 0;
 
 	for (i = 0; i < xyarray__max_x(evsel->fd) && !err; i++)
-		err = perf_evsel__run_ioctl(evsel, PERF_EVENT_IOC_DISABLE, NULL, i);
+		err = perf_evsel__run_ioctl(evsel, PERF_EVENT_IOC_DISABLE, 0, i);
+	return err;
+}
+
+int perf_evsel__refresh(struct perf_evsel *evsel, int refresh, bool enable)
+{
+	int i, ioc;
+	int err = 0;
+
+	ioc = enable ? PERF_EVENT_IOC_REFRESH : PERF_EVENT_IOC_INC_EVENT_LIMIT;
+
+	for (i = 0; i < xyarray__max_x(evsel->fd) && !err; i++)
+		err = perf_evsel__run_ioctl(evsel, ioc, refresh, i);
+	return err;
+}
+
+int perf_evsel__period(struct perf_evsel *evsel, uint64_t period)
+{
+	struct perf_event_attr *attr;
+	int i;
+	int err = 0;
+
+	attr = perf_evsel__attr(evsel);
+
+	for (i = 0; i < xyarray__max_x(evsel->fd); i++) {
+		err = perf_evsel__run_ioctl(evsel, PERF_EVENT_IOC_PERIOD,
+					    (unsigned long)&period, i);
+		if (err)
+			return err;
+	}
+
+	attr->sample_period = period;
+
 	return err;
 }
 
@@ -493,7 +526,7 @@ int perf_evsel__apply_filter(struct perf_evsel *evsel, const char *filter)
 	for (i = 0; i < perf_cpu_map__nr(evsel->cpus) && !err; i++)
 		err = perf_evsel__run_ioctl(evsel,
 				     PERF_EVENT_IOC_SET_FILTER,
-				     (void *)filter, i);
+				     (unsigned long)filter, i);
 	return err;
 }
 
diff --git a/tools/lib/perf/include/perf/evsel.h b/tools/lib/perf/include/perf/evsel.h
index 6f92204075c244bc623b26dc2c97fa4c835a4228..19d30bab8fe1e2d860b432855bc01958ba5e54c8 100644
--- a/tools/lib/perf/include/perf/evsel.h
+++ b/tools/lib/perf/include/perf/evsel.h
@@ -40,6 +40,8 @@ LIBPERF_API int perf_evsel__enable(struct perf_evsel *evsel);
 LIBPERF_API int perf_evsel__enable_cpu(struct perf_evsel *evsel, int cpu_map_idx);
 LIBPERF_API int perf_evsel__enable_thread(struct perf_evsel *evsel, int thread);
 LIBPERF_API int perf_evsel__disable(struct perf_evsel *evsel);
+LIBPERF_API int perf_evsel__refresh(struct perf_evsel *evsel, int refresh, bool enable);
+LIBPERF_API int perf_evsel__period(struct perf_evsel *evsel, uint64_t period);
 LIBPERF_API int perf_evsel__disable_cpu(struct perf_evsel *evsel, int cpu_map_idx);
 LIBPERF_API struct perf_cpu_map *perf_evsel__cpus(struct perf_evsel *evsel);
 LIBPERF_API struct perf_thread_map *perf_evsel__threads(struct perf_evsel *evsel);
diff --git a/tools/lib/perf/libperf.map b/tools/lib/perf/libperf.map
index fdd8304fe9d05862376c122d68777a6bf6a03d80..d67299bd5216b02e09a47a1b1ef947eef460cf17 100644
--- a/tools/lib/perf/libperf.map
+++ b/tools/lib/perf/libperf.map
@@ -33,6 +33,8 @@ LIBPERF_0.0.1 {
 		perf_evsel__munmap;
 		perf_evsel__mmap_base;
 		perf_evsel__read;
+		perf_evsel__refresh;
+		perf_evsel__period;
 		perf_evsel__cpus;
 		perf_evsel__threads;
 		perf_evsel__attr;

-- 
2.34.1



* Re: [PATCH v3 1/3] perf: Add PERF_EVENT_IOC_INC_EVENT_LIMIT
  2025-01-07  4:07 ` [PATCH v3 1/3] " Charlie Jenkins
@ 2025-01-13 13:18   ` Peter Zijlstra
  2025-01-13 18:59     ` Charlie Jenkins
  0 siblings, 1 reply; 6+ messages in thread
From: Peter Zijlstra @ 2025-01-13 13:18 UTC (permalink / raw)
  To: Charlie Jenkins
  Cc: Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
	Atish Patra, linux-perf-users, linux-kernel

On Mon, Jan 06, 2025 at 08:07:32PM -0800, Charlie Jenkins wrote:
> PERF_EVENT_IOC_REFRESH immediately enables the event after incrementing
> event_limit. Provide a new ioctl command that allows programs to increment
> event_limit without enabling the event. A use case for this is to set an
> event_limit in combination with enable_on_exec.


Utter lack of WHY.


* Re: [PATCH v3 1/3] perf: Add PERF_EVENT_IOC_INC_EVENT_LIMIT
  2025-01-13 13:18   ` Peter Zijlstra
@ 2025-01-13 18:59     ` Charlie Jenkins
  0 siblings, 0 replies; 6+ messages in thread
From: Charlie Jenkins @ 2025-01-13 18:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
	Atish Patra, linux-perf-users, linux-kernel

On Mon, Jan 13, 2025 at 02:18:12PM +0100, Peter Zijlstra wrote:
> On Mon, Jan 06, 2025 at 08:07:32PM -0800, Charlie Jenkins wrote:
> > PERF_EVENT_IOC_REFRESH immediately enables the event after incrementing
> > event_limit. Provide a new ioctl command that allows programs to increment
> > event_limit without enabling the event. A use case for this is to set an
> > event_limit in combination with enable_on_exec.
> 
> 
> Utter lack of WHY.

My use case is using libperf to calculate performance metrics from
running a program for X number of instructions. PERF_EVENT_IOC_REFRESH
can be used to stop the counters after the instruction sampling threshold
has been hit, but counting has to start as soon as the program starts
executing, which is a perfect application for enable_on_exec. The new
ioctl lets the limit be armed up front while leaving the enabling to
enable_on_exec.
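
Roughly, as a sketch (mine, not code from this series), with the evsel
opened with .disabled = 1, .enable_on_exec = 1 and a sample_period chosen
so that 'nr' overflows cover the X instructions:

  #include <stdbool.h>
  #include <perf/evsel.h>

  static int run_window(struct perf_evsel *evsel, int nr)
  {
          struct perf_counts_values counts;

          /* Arm the limit; enable_on_exec starts counting at execve(). */
          int err = perf_evsel__refresh(evsel, nr, false);

          if (err)
                  return err;

          /* ... release the child, wait for the limit to be hit ... */

          /* The event has auto-disabled; read the frozen counts. */
          return perf_evsel__read(evsel, 0, 0, &counts);
  }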

- Charlie


