linux-kernel.vger.kernel.org archive mirror
* [PATCH V2 00/15] perf: Fix the throttle logic for group
@ 2025-05-14 15:13 kan.liang
  2025-05-14 15:13 ` [PATCH V2 01/15] perf: Fix the throttle logic for a group kan.liang
                   ` (14 more replies)
  0 siblings, 15 replies; 26+ messages in thread
From: kan.liang @ 2025-05-14 15:13 UTC (permalink / raw)
  To: peterz, mingo, namhyung, irogers, mark.rutland, linux-kernel,
	linux-perf-users
  Cc: eranian, ctshao, tmricht, Kan Liang

From: Kan Liang <kan.liang@linux.intel.com>

Changes since V1:
- Apply the suggested throttle/unthrottle functions from Peter.
  The MAX_INTERRUPTS and throttle logs are applied to all events.
- Update the description and comments accordingly
- Add Reviewed-by from Ravi and Max

Sampling read doesn't work well with a group.
The issue was originally found when the 'Basic leader sampling test'
case failed on s390.
https://lore.kernel.org/all/20250228062241.303309-1-tmricht@linux.ibm.com/

Stephane debugged it and found it was caused by the throttling logic.
https://lore.kernel.org/all/CABPqkBQzCMNS_PfLZBWVuX9o8Z55PovwJvpVWMWzyeExFJ5R4Q@mail.gmail.com/

The throttle logic is generic and shared by all ARCHs, so the issue
also impacts other ARCHs, e.g., X86.

On an Intel GNR machine,
$ perf record -e "{cycles,cycles}:S" ...

$ perf report -D | grep THROTTLE | tail -2
            THROTTLE events:        426  ( 9.0%)
          UNTHROTTLE events:        425  ( 9.0%)

$ perf report -D | grep PERF_RECORD_SAMPLE -a4 | tail -n 5
0 1020120874009167 0x74970 [0x68]: PERF_RECORD_SAMPLE(IP, 0x1):
... sample_read:
.... group nr 2
..... id 0000000000000327, value 000000000cbb993a, lost 0
..... id 0000000000000328, value 00000002211c26df, lost 0

This patch set provides a generic fix for group throttle support, so
the buggy driver-specific implementations can be removed.

The patch set has only been verified on newer Intel platforms.

Kan Liang (15):
  perf: Fix the throttle logic for a group
  perf/x86/intel: Remove driver-specific throttle support
  perf/x86/amd: Remove driver-specific throttle support
  perf/x86/zhaoxin: Remove driver-specific throttle support
  powerpc/perf: Remove driver-specific throttle support
  s390/perf: Remove driver-specific throttle support
  perf/arm: Remove driver-specific throttle support
  perf/apple_m1: Remove driver-specific throttle support
  alpha/perf: Remove driver-specific throttle support
  arc/perf: Remove driver-specific throttle support
  csky/perf: Remove driver-specific throttle support
  loongarch/perf: Remove driver-specific throttle support
  sparc/perf: Remove driver-specific throttle support
  xtensa/perf: Remove driver-specific throttle support
  mips/perf: Remove driver-specific throttle support

 arch/alpha/kernel/perf_event.c       | 11 ++----
 arch/arc/kernel/perf_event.c         |  6 +--
 arch/csky/kernel/perf_event.c        |  3 +-
 arch/loongarch/kernel/perf_event.c   |  3 +-
 arch/mips/kernel/perf_event_mipsxx.c |  3 +-
 arch/powerpc/perf/core-book3s.c      |  6 +--
 arch/powerpc/perf/core-fsl-emb.c     |  3 +-
 arch/s390/kernel/perf_cpum_cf.c      |  2 -
 arch/s390/kernel/perf_cpum_sf.c      |  5 +--
 arch/sparc/kernel/perf_event.c       |  3 +-
 arch/x86/events/amd/core.c           |  3 +-
 arch/x86/events/amd/ibs.c            |  4 +-
 arch/x86/events/core.c               |  3 +-
 arch/x86/events/intel/core.c         |  6 +--
 arch/x86/events/intel/ds.c           |  7 ++--
 arch/x86/events/intel/knc.c          |  3 +-
 arch/x86/events/intel/p4.c           |  3 +-
 arch/x86/events/zhaoxin/core.c       |  3 +-
 arch/xtensa/kernel/perf_event.c      |  3 +-
 drivers/perf/apple_m1_cpu_pmu.c      |  3 +-
 drivers/perf/arm_pmuv3.c             |  3 +-
 drivers/perf/arm_v6_pmu.c            |  3 +-
 drivers/perf/arm_v7_pmu.c            |  3 +-
 drivers/perf/arm_xscale_pmu.c        |  6 +--
 kernel/events/core.c                 | 58 +++++++++++++++++++++-------
 25 files changed, 75 insertions(+), 81 deletions(-)

-- 
2.38.1


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH V2 01/15] perf: Fix the throttle logic for a group
  2025-05-14 15:13 [PATCH V2 00/15] perf: Fix the throttle logic for group kan.liang
@ 2025-05-14 15:13 ` kan.liang
  2025-05-15  9:43   ` Leo Yan
  2025-05-14 15:13 ` [PATCH V2 02/15] perf/x86/intel: Remove driver-specific throttle support kan.liang
                   ` (13 subsequent siblings)
  14 siblings, 1 reply; 26+ messages in thread
From: kan.liang @ 2025-05-14 15:13 UTC (permalink / raw)
  To: peterz, mingo, namhyung, irogers, mark.rutland, linux-kernel,
	linux-perf-users
  Cc: eranian, ctshao, tmricht, Kan Liang

From: Kan Liang <kan.liang@linux.intel.com>

The current throttle logic doesn't work well with a group, e.g., in
the following sampling-read case.

$ perf record -e "{cycles,cycles}:S" ...

$ perf report -D | grep THROTTLE | tail -2
            THROTTLE events:        426  ( 9.0%)
          UNTHROTTLE events:        425  ( 9.0%)

$ perf report -D | grep PERF_RECORD_SAMPLE -a4 | tail -n 5
0 1020120874009167 0x74970 [0x68]: PERF_RECORD_SAMPLE(IP, 0x1):
... sample_read:
.... group nr 2
..... id 0000000000000327, value 000000000cbb993a, lost 0
..... id 0000000000000328, value 00000002211c26df, lost 0

The second cycles event has a much larger value than the first cycles
event in the same group.

The current throttle logic in the generic code only logs the THROTTLE
event. It relies on the driver-specific implementation to disable
events. The implementation is similar for all ARCHs: only the
overflowing event is disabled, rather than the whole group.

The logic to disable the group should be generic for all ARCHs. Add it
in the generic code. The following patches will remove the buggy
driver-specific implementations.

Throttling only happens when an event overflows. Stop the entire group
when any event in the group triggers the throttle, and set
MAX_INTERRUPTS on all throttled events.

Unthrottling can happen in 3 places:
- Event/group scheduling. All events in the group are scheduled one by
  one, so all of them will be unthrottled eventually. Nothing needs to
  be changed.
- perf_adjust_freq_unthr_events(), on each tick. The group needs to be
  restarted altogether.
- __perf_event_period(). The whole group needs to be restarted
  altogether as well.

With the fix,
$ sudo perf report -D | grep PERF_RECORD_SAMPLE -a4 | tail -n 5
0 3573470770332 0x12f5f8 [0x70]: PERF_RECORD_SAMPLE(IP, 0x2):
... sample_read:
.... group nr 2
..... id 0000000000000a28, value 00000004fd3dfd8f, lost 0
..... id 0000000000000a29, value 00000004fd3dfd8f, lost 0

Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---

Changes since V1:
- Apply the suggested throttle/unthrottle functions from Peter.
  The MAX_INTERRUPTS and throttle logs are applied to all events.
- Update the description and comments accordingly

 kernel/events/core.c | 58 +++++++++++++++++++++++++++++++++-----------
 1 file changed, 44 insertions(+), 14 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index a84abc2b7f20..a270fcda766d 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2734,6 +2734,39 @@ void perf_event_disable_inatomic(struct perf_event *event)
 static void perf_log_throttle(struct perf_event *event, int enable);
 static void perf_log_itrace_start(struct perf_event *event);
 
+static void perf_event_unthrottle(struct perf_event *event, bool start)
+{
+	event->hw.interrupts = 0;
+	if (start)
+		event->pmu->start(event, 0);
+	perf_log_throttle(event, 1);
+}
+
+static void perf_event_throttle(struct perf_event *event)
+{
+	event->pmu->stop(event, 0);
+	event->hw.interrupts = MAX_INTERRUPTS;
+	perf_log_throttle(event, 0);
+}
+
+static void perf_event_unthrottle_group(struct perf_event *event, bool start)
+{
+	struct perf_event *sibling, *leader = event->group_leader;
+
+	perf_event_unthrottle(leader, leader != event || start);
+	for_each_sibling_event(sibling, leader)
+		perf_event_unthrottle(sibling, sibling != event || start);
+}
+
+static void perf_event_throttle_group(struct perf_event *event)
+{
+	struct perf_event *sibling, *leader = event->group_leader;
+
+	perf_event_throttle(leader);
+	for_each_sibling_event(sibling, leader)
+		perf_event_throttle(sibling);
+}
+
 static int
 event_sched_in(struct perf_event *event, struct perf_event_context *ctx)
 {
@@ -4389,10 +4422,8 @@ static void perf_adjust_freq_unthr_events(struct list_head *event_list)
 		hwc = &event->hw;
 
 		if (hwc->interrupts == MAX_INTERRUPTS) {
-			hwc->interrupts = 0;
-			perf_log_throttle(event, 1);
-			if (!event->attr.freq || !event->attr.sample_freq)
-				event->pmu->start(event, 0);
+			perf_event_unthrottle_group(event,
+				!event->attr.freq || !event->attr.sample_freq);
 		}
 
 		if (!event->attr.freq || !event->attr.sample_freq)
@@ -6421,14 +6452,6 @@ static void __perf_event_period(struct perf_event *event,
 	active = (event->state == PERF_EVENT_STATE_ACTIVE);
 	if (active) {
 		perf_pmu_disable(event->pmu);
-		/*
-		 * We could be throttled; unthrottle now to avoid the tick
-		 * trying to unthrottle while we already re-started the event.
-		 */
-		if (event->hw.interrupts == MAX_INTERRUPTS) {
-			event->hw.interrupts = 0;
-			perf_log_throttle(event, 1);
-		}
 		event->pmu->stop(event, PERF_EF_UPDATE);
 	}
 
@@ -6436,6 +6459,14 @@ static void __perf_event_period(struct perf_event *event,
 
 	if (active) {
 		event->pmu->start(event, PERF_EF_RELOAD);
+		/*
+		 * Once the period is force-reset, the event starts immediately.
+		 * But the event/group could be throttled. Unthrottle the
+		 * event/group now to avoid the next tick trying to unthrottle
+		 * while we already re-started the event/group.
+		 */
+		if (event->hw.interrupts == MAX_INTERRUPTS)
+			perf_event_unthrottle_group(event, false);
 		perf_pmu_enable(event->pmu);
 	}
 }
@@ -10326,8 +10357,7 @@ __perf_event_account_interrupt(struct perf_event *event, int throttle)
 	if (unlikely(throttle && hwc->interrupts >= max_samples_per_tick)) {
 		__this_cpu_inc(perf_throttled_count);
 		tick_dep_set_cpu(smp_processor_id(), TICK_DEP_BIT_PERF_EVENTS);
-		hwc->interrupts = MAX_INTERRUPTS;
-		perf_log_throttle(event, 0);
+		perf_event_throttle_group(event);
 		ret = 1;
 	}
 
-- 
2.38.1



* [PATCH V2 02/15] perf/x86/intel: Remove driver-specific throttle support
  2025-05-14 15:13 [PATCH V2 00/15] perf: Fix the throttle logic for group kan.liang
  2025-05-14 15:13 ` [PATCH V2 01/15] perf: Fix the throttle logic for a group kan.liang
@ 2025-05-14 15:13 ` kan.liang
  2025-05-14 15:13 ` [PATCH V2 03/15] perf/x86/amd: " kan.liang
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: kan.liang @ 2025-05-14 15:13 UTC (permalink / raw)
  To: peterz, mingo, namhyung, irogers, mark.rutland, linux-kernel,
	linux-perf-users
  Cc: eranian, ctshao, tmricht, Kan Liang

From: Kan Liang <kan.liang@linux.intel.com>

Throttle support has been added to the generic code. Remove the
driver-specific throttle support.

Besides throttling, perf_event_overflow() may return true because of
the event_limit. In that case, the event is already disabled via
perf_event_disable_inatomic(), so pmu->stop() is not required either.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
 arch/x86/events/core.c       | 3 +--
 arch/x86/events/intel/core.c | 6 ++----
 arch/x86/events/intel/ds.c   | 7 +++----
 arch/x86/events/intel/knc.c  | 3 +--
 arch/x86/events/intel/p4.c   | 3 +--
 5 files changed, 8 insertions(+), 14 deletions(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 186e31cd0c14..8a2f73333a50 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1730,8 +1730,7 @@ int x86_pmu_handle_irq(struct pt_regs *regs)
 
 		perf_sample_save_brstack(&data, event, &cpuc->lbr_stack, NULL);
 
-		if (perf_event_overflow(event, &data, regs))
-			x86_pmu_stop(event, 0);
+		perf_event_overflow(event, &data, regs);
 	}
 
 	if (handled)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index b7562d66c6ea..a8309a67693e 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3138,8 +3138,7 @@ static void x86_pmu_handle_guest_pebs(struct pt_regs *regs,
 			continue;
 
 		perf_sample_data_init(data, 0, event->hw.last_period);
-		if (perf_event_overflow(event, data, regs))
-			x86_pmu_stop(event, 0);
+		perf_event_overflow(event, data, regs);
 
 		/* Inject one fake event is enough. */
 		break;
@@ -3282,8 +3281,7 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
 		if (has_branch_stack(event))
 			intel_pmu_lbr_save_brstack(&data, cpuc, event);
 
-		if (perf_event_overflow(event, &data, regs))
-			x86_pmu_stop(event, 0);
+		perf_event_overflow(event, &data, regs);
 	}
 
 	return handled;
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 58c054fa56b5..f8610f7196f0 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -2368,8 +2368,7 @@ __intel_pmu_pebs_last_event(struct perf_event *event,
 		 * All but the last records are processed.
 		 * The last one is left to be able to call the overflow handler.
 		 */
-		if (perf_event_overflow(event, data, regs))
-			x86_pmu_stop(event, 0);
+		perf_event_overflow(event, data, regs);
 	}
 
 	if (hwc->flags & PERF_X86_EVENT_AUTO_RELOAD) {
@@ -2597,8 +2596,8 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs, struct perf_sample_d
 		if (error[bit]) {
 			perf_log_lost_samples(event, error[bit]);
 
-			if (iregs && perf_event_account_interrupt(event))
-				x86_pmu_stop(event, 0);
+			if (iregs)
+				perf_event_account_interrupt(event);
 		}
 
 		if (counts[bit]) {
diff --git a/arch/x86/events/intel/knc.c b/arch/x86/events/intel/knc.c
index 3e8ec049b46d..384589168c1a 100644
--- a/arch/x86/events/intel/knc.c
+++ b/arch/x86/events/intel/knc.c
@@ -254,8 +254,7 @@ static int knc_pmu_handle_irq(struct pt_regs *regs)
 
 		perf_sample_data_init(&data, 0, last_period);
 
-		if (perf_event_overflow(event, &data, regs))
-			x86_pmu_stop(event, 0);
+		perf_event_overflow(event, &data, regs);
 	}
 
 	/*
diff --git a/arch/x86/events/intel/p4.c b/arch/x86/events/intel/p4.c
index c85a9fc44355..126d5ae264cb 100644
--- a/arch/x86/events/intel/p4.c
+++ b/arch/x86/events/intel/p4.c
@@ -1072,8 +1072,7 @@ static int p4_pmu_handle_irq(struct pt_regs *regs)
 			continue;
 
 
-		if (perf_event_overflow(event, &data, regs))
-			x86_pmu_stop(event, 0);
+		perf_event_overflow(event, &data, regs);
 	}
 
 	if (handled)
-- 
2.38.1



* [PATCH V2 03/15] perf/x86/amd: Remove driver-specific throttle support
  2025-05-14 15:13 [PATCH V2 00/15] perf: Fix the throttle logic for group kan.liang
  2025-05-14 15:13 ` [PATCH V2 01/15] perf: Fix the throttle logic for a group kan.liang
  2025-05-14 15:13 ` [PATCH V2 02/15] perf/x86/intel: Remove driver-specific throttle support kan.liang
@ 2025-05-14 15:13 ` kan.liang
  2025-05-14 15:13 ` [PATCH V2 04/15] perf/x86/zhaoxin: " kan.liang
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: kan.liang @ 2025-05-14 15:13 UTC (permalink / raw)
  To: peterz, mingo, namhyung, irogers, mark.rutland, linux-kernel,
	linux-perf-users
  Cc: eranian, ctshao, tmricht, Kan Liang, Ravi Bangoria, Sandipan Das

From: Kan Liang <kan.liang@linux.intel.com>

Throttle support has been added to the generic code. Remove the
driver-specific throttle support.

Besides throttling, perf_event_overflow() may return true because of
the event_limit. In that case, the event is already disabled via
perf_event_disable_inatomic(), so pmu->stop() is not required either.

Reviewed-by: Ravi Bangoria <ravi.bangoria@amd.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Cc: Sandipan Das <sandipan.das@amd.com>
Cc: Ravi Bangoria <ravi.bangoria@amd.com>
---

Changes since V1:
- Add Reviewed-by from Ravi

 arch/x86/events/amd/core.c | 3 +--
 arch/x86/events/amd/ibs.c  | 4 +---
 2 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
index 30d6ceb4c8ad..5e64283b9bf2 100644
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -1003,8 +1003,7 @@ static int amd_pmu_v2_handle_irq(struct pt_regs *regs)
 
 		perf_sample_save_brstack(&data, event, &cpuc->lbr_stack, NULL);
 
-		if (perf_event_overflow(event, &data, regs))
-			x86_pmu_stop(event, 0);
+		perf_event_overflow(event, &data, regs);
 	}
 
 	/*
diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
index 0252b7ea8bca..4bbbca02aeb1 100644
--- a/arch/x86/events/amd/ibs.c
+++ b/arch/x86/events/amd/ibs.c
@@ -1373,9 +1373,7 @@ static int perf_ibs_handle_irq(struct perf_ibs *perf_ibs, struct pt_regs *iregs)
 		hwc->sample_period = perf_ibs->min_period;
 
 out:
-	if (throttle) {
-		perf_ibs_stop(event, 0);
-	} else {
+	if (!throttle) {
 		if (perf_ibs == &perf_ibs_op) {
 			if (ibs_caps & IBS_CAPS_OPCNTEXT) {
 				new_config = period & IBS_OP_MAX_CNT_EXT_MASK;
-- 
2.38.1



* [PATCH V2 04/15] perf/x86/zhaoxin: Remove driver-specific throttle support
  2025-05-14 15:13 [PATCH V2 00/15] perf: Fix the throttle logic for group kan.liang
                   ` (2 preceding siblings ...)
  2025-05-14 15:13 ` [PATCH V2 03/15] perf/x86/amd: " kan.liang
@ 2025-05-14 15:13 ` kan.liang
  2025-05-14 15:13 ` [PATCH V2 05/15] powerpc/perf: " kan.liang
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: kan.liang @ 2025-05-14 15:13 UTC (permalink / raw)
  To: peterz, mingo, namhyung, irogers, mark.rutland, linux-kernel,
	linux-perf-users
  Cc: eranian, ctshao, tmricht, Kan Liang, silviazhao, CodyYao-oc

From: Kan Liang <kan.liang@linux.intel.com>

Throttle support has been added to the generic code. Remove the
driver-specific throttle support.

Besides throttling, perf_event_overflow() may return true because of
the event_limit. In that case, the event is already disabled via
perf_event_disable_inatomic(), so pmu->stop() is not required either.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Cc: silviazhao <silviazhao-oc@zhaoxin.com>
Cc: CodyYao-oc <CodyYao-oc@zhaoxin.com>
---
 arch/x86/events/zhaoxin/core.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/x86/events/zhaoxin/core.c b/arch/x86/events/zhaoxin/core.c
index 2fd9b0cf9a5e..49a5944fac63 100644
--- a/arch/x86/events/zhaoxin/core.c
+++ b/arch/x86/events/zhaoxin/core.c
@@ -397,8 +397,7 @@ static int zhaoxin_pmu_handle_irq(struct pt_regs *regs)
 		if (!x86_perf_event_set_period(event))
 			continue;
 
-		if (perf_event_overflow(event, &data, regs))
-			x86_pmu_stop(event, 0);
+		perf_event_overflow(event, &data, regs);
 	}
 
 	/*
-- 
2.38.1



* [PATCH V2 05/15] powerpc/perf: Remove driver-specific throttle support
  2025-05-14 15:13 [PATCH V2 00/15] perf: Fix the throttle logic for group kan.liang
                   ` (3 preceding siblings ...)
  2025-05-14 15:13 ` [PATCH V2 04/15] perf/x86/zhaoxin: " kan.liang
@ 2025-05-14 15:13 ` kan.liang
  2025-05-14 15:13 ` [PATCH V2 06/15] s390/perf: " kan.liang
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: kan.liang @ 2025-05-14 15:13 UTC (permalink / raw)
  To: peterz, mingo, namhyung, irogers, mark.rutland, linux-kernel,
	linux-perf-users
  Cc: eranian, ctshao, tmricht, Kan Liang, Athira Rajeev,
	Madhavan Srinivasan, linuxppc-dev

From: Kan Liang <kan.liang@linux.intel.com>

Throttle support has been added to the generic code. Remove the
driver-specific throttle support.

Besides throttling, perf_event_overflow() may return true because of
the event_limit. In that case, the event is already disabled via
perf_event_disable_inatomic(), so pmu->stop() is not required either.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: linuxppc-dev@lists.ozlabs.org
---
 arch/powerpc/perf/core-book3s.c  | 6 ++----
 arch/powerpc/perf/core-fsl-emb.c | 3 +--
 2 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
index 42ff4d167acc..8b0081441f85 100644
--- a/arch/powerpc/perf/core-book3s.c
+++ b/arch/powerpc/perf/core-book3s.c
@@ -2344,12 +2344,10 @@ static void record_and_restart(struct perf_event *event, unsigned long val,
 			ppmu->get_mem_weight(&data.weight.full, event->attr.sample_type);
 			data.sample_flags |= PERF_SAMPLE_WEIGHT_TYPE;
 		}
-		if (perf_event_overflow(event, &data, regs))
-			power_pmu_stop(event, 0);
+		perf_event_overflow(event, &data, regs);
 	} else if (period) {
 		/* Account for interrupt in case of invalid SIAR */
-		if (perf_event_account_interrupt(event))
-			power_pmu_stop(event, 0);
+		perf_event_account_interrupt(event);
 	}
 }
 
diff --git a/arch/powerpc/perf/core-fsl-emb.c b/arch/powerpc/perf/core-fsl-emb.c
index d2ffcc7021c5..7120ab20cbfe 100644
--- a/arch/powerpc/perf/core-fsl-emb.c
+++ b/arch/powerpc/perf/core-fsl-emb.c
@@ -635,8 +635,7 @@ static void record_and_restart(struct perf_event *event, unsigned long val,
 
 		perf_sample_data_init(&data, 0, last_period);
 
-		if (perf_event_overflow(event, &data, regs))
-			fsl_emb_pmu_stop(event, 0);
+		perf_event_overflow(event, &data, regs);
 	}
 }
 
-- 
2.38.1



* [PATCH V2 06/15] s390/perf: Remove driver-specific throttle support
  2025-05-14 15:13 [PATCH V2 00/15] perf: Fix the throttle logic for group kan.liang
                   ` (4 preceding siblings ...)
  2025-05-14 15:13 ` [PATCH V2 05/15] powerpc/perf: " kan.liang
@ 2025-05-14 15:13 ` kan.liang
  2025-05-15 13:15   ` Thomas Richter
  2025-05-14 15:13 ` [PATCH V2 07/15] perf/arm: " kan.liang
                   ` (8 subsequent siblings)
  14 siblings, 1 reply; 26+ messages in thread
From: kan.liang @ 2025-05-14 15:13 UTC (permalink / raw)
  To: peterz, mingo, namhyung, irogers, mark.rutland, linux-kernel,
	linux-perf-users
  Cc: eranian, ctshao, tmricht, Kan Liang, linux-s390

From: Kan Liang <kan.liang@linux.intel.com>

Throttle support has been added to the generic code. Remove the
driver-specific throttle support.

Besides throttling, perf_event_overflow() may return true because of
the event_limit. In that case, the event is already disabled via
perf_event_disable_inatomic(), so pmu->stop() is not required either.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Cc: Thomas Richter <tmricht@linux.ibm.com>
Cc: linux-s390@vger.kernel.org
---
 arch/s390/kernel/perf_cpum_cf.c | 2 --
 arch/s390/kernel/perf_cpum_sf.c | 5 +----
 2 files changed, 1 insertion(+), 6 deletions(-)

diff --git a/arch/s390/kernel/perf_cpum_cf.c b/arch/s390/kernel/perf_cpum_cf.c
index e657fad7e376..6a262e198e35 100644
--- a/arch/s390/kernel/perf_cpum_cf.c
+++ b/arch/s390/kernel/perf_cpum_cf.c
@@ -980,8 +980,6 @@ static int cfdiag_push_sample(struct perf_event *event,
 	}
 
 	overflow = perf_event_overflow(event, &data, &regs);
-	if (overflow)
-		event->pmu->stop(event, 0);
 
 	perf_event_update_userpage(event);
 	return overflow;
diff --git a/arch/s390/kernel/perf_cpum_sf.c b/arch/s390/kernel/perf_cpum_sf.c
index ad22799d8a7d..91469401f2c9 100644
--- a/arch/s390/kernel/perf_cpum_sf.c
+++ b/arch/s390/kernel/perf_cpum_sf.c
@@ -1072,10 +1072,7 @@ static int perf_push_sample(struct perf_event *event,
 	overflow = 0;
 	if (perf_event_exclude(event, &regs, sde_regs))
 		goto out;
-	if (perf_event_overflow(event, &data, &regs)) {
-		overflow = 1;
-		event->pmu->stop(event, 0);
-	}
+	overflow = perf_event_overflow(event, &data, &regs);
 	perf_event_update_userpage(event);
 out:
 	return overflow;
-- 
2.38.1



* [PATCH V2 07/15] perf/arm: Remove driver-specific throttle support
  2025-05-14 15:13 [PATCH V2 00/15] perf: Fix the throttle logic for group kan.liang
                   ` (5 preceding siblings ...)
  2025-05-14 15:13 ` [PATCH V2 06/15] s390/perf: " kan.liang
@ 2025-05-14 15:13 ` kan.liang
  2025-05-16 13:24   ` Leo Yan
  2025-05-14 15:13 ` [PATCH V2 08/15] perf/apple_m1: " kan.liang
                   ` (7 subsequent siblings)
  14 siblings, 1 reply; 26+ messages in thread
From: kan.liang @ 2025-05-14 15:13 UTC (permalink / raw)
  To: peterz, mingo, namhyung, irogers, mark.rutland, linux-kernel,
	linux-perf-users
  Cc: eranian, ctshao, tmricht, Kan Liang, Rob Herring,
	Vincenzo Frascino, Will Deacon

From: Kan Liang <kan.liang@linux.intel.com>

Throttle support has been added to the generic code. Remove the
driver-specific throttle support.

Besides throttling, perf_event_overflow() may return true because of
the event_limit. In that case, the event is already disabled via
perf_event_disable_inatomic(), so pmu->stop() is not required either.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Rob Herring (Arm) <robh@kernel.org>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 drivers/perf/arm_pmuv3.c      | 3 +--
 drivers/perf/arm_v6_pmu.c     | 3 +--
 drivers/perf/arm_v7_pmu.c     | 3 +--
 drivers/perf/arm_xscale_pmu.c | 6 ++----
 4 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index e506d59654e7..3db9f4ed17e8 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -887,8 +887,7 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 		 * an irq_work which will be taken care of in the handling of
 		 * IPI_IRQ_WORK.
 		 */
-		if (perf_event_overflow(event, &data, regs))
-			cpu_pmu->disable(event);
+		perf_event_overflow(event, &data, regs);
 	}
 	armv8pmu_start(cpu_pmu);
 
diff --git a/drivers/perf/arm_v6_pmu.c b/drivers/perf/arm_v6_pmu.c
index b09615bb2bb2..7cb12c8e06c7 100644
--- a/drivers/perf/arm_v6_pmu.c
+++ b/drivers/perf/arm_v6_pmu.c
@@ -276,8 +276,7 @@ armv6pmu_handle_irq(struct arm_pmu *cpu_pmu)
 		if (!armpmu_event_set_period(event))
 			continue;
 
-		if (perf_event_overflow(event, &data, regs))
-			cpu_pmu->disable(event);
+		perf_event_overflow(event, &data, regs);
 	}
 
 	/*
diff --git a/drivers/perf/arm_v7_pmu.c b/drivers/perf/arm_v7_pmu.c
index 17831e1920bd..a1e438101114 100644
--- a/drivers/perf/arm_v7_pmu.c
+++ b/drivers/perf/arm_v7_pmu.c
@@ -930,8 +930,7 @@ static irqreturn_t armv7pmu_handle_irq(struct arm_pmu *cpu_pmu)
 		if (!armpmu_event_set_period(event))
 			continue;
 
-		if (perf_event_overflow(event, &data, regs))
-			cpu_pmu->disable(event);
+		perf_event_overflow(event, &data, regs);
 	}
 
 	/*
diff --git a/drivers/perf/arm_xscale_pmu.c b/drivers/perf/arm_xscale_pmu.c
index 638fea9b1263..c2ac41dd9e19 100644
--- a/drivers/perf/arm_xscale_pmu.c
+++ b/drivers/perf/arm_xscale_pmu.c
@@ -186,8 +186,7 @@ xscale1pmu_handle_irq(struct arm_pmu *cpu_pmu)
 		if (!armpmu_event_set_period(event))
 			continue;
 
-		if (perf_event_overflow(event, &data, regs))
-			cpu_pmu->disable(event);
+		perf_event_overflow(event, &data, regs);
 	}
 
 	irq_work_run();
@@ -519,8 +518,7 @@ xscale2pmu_handle_irq(struct arm_pmu *cpu_pmu)
 		if (!armpmu_event_set_period(event))
 			continue;
 
-		if (perf_event_overflow(event, &data, regs))
-			cpu_pmu->disable(event);
+		perf_event_overflow(event, &data, regs);
 	}
 
 	irq_work_run();
-- 
2.38.1



* [PATCH V2 08/15] perf/apple_m1: Remove driver-specific throttle support
  2025-05-14 15:13 [PATCH V2 00/15] perf: Fix the throttle logic for group kan.liang
                   ` (6 preceding siblings ...)
  2025-05-14 15:13 ` [PATCH V2 07/15] perf/arm: " kan.liang
@ 2025-05-14 15:13 ` kan.liang
  2025-05-14 15:13 ` [PATCH V2 09/15] alpha/perf: " kan.liang
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: kan.liang @ 2025-05-14 15:13 UTC (permalink / raw)
  To: peterz, mingo, namhyung, irogers, mark.rutland, linux-kernel,
	linux-perf-users
  Cc: eranian, ctshao, tmricht, Kan Liang, Oliver Upton

From: Kan Liang <kan.liang@linux.intel.com>

Throttle support has been added to the generic code. Remove the
driver-specific throttle support.

Besides throttling, perf_event_overflow() may return true because of
the event_limit. In that case, the event is already disabled via
perf_event_disable_inatomic(), so pmu->stop() is not required either.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Cc: Oliver Upton <oliver.upton@linux.dev>
---
 drivers/perf/apple_m1_cpu_pmu.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/perf/apple_m1_cpu_pmu.c b/drivers/perf/apple_m1_cpu_pmu.c
index df9a28ba69dc..81b6f1a62349 100644
--- a/drivers/perf/apple_m1_cpu_pmu.c
+++ b/drivers/perf/apple_m1_cpu_pmu.c
@@ -474,8 +474,7 @@ static irqreturn_t m1_pmu_handle_irq(struct arm_pmu *cpu_pmu)
 		if (!armpmu_event_set_period(event))
 			continue;
 
-		if (perf_event_overflow(event, &data, regs))
-			m1_pmu_disable_event(event);
+		perf_event_overflow(event, &data, regs);
 	}
 
 	cpu_pmu->start(cpu_pmu);
-- 
2.38.1



* [PATCH V2 09/15] alpha/perf: Remove driver-specific throttle support
  2025-05-14 15:13 [PATCH V2 00/15] perf: Fix the throttle logic for group kan.liang
                   ` (7 preceding siblings ...)
  2025-05-14 15:13 ` [PATCH V2 08/15] perf/apple_m1: " kan.liang
@ 2025-05-14 15:13 ` kan.liang
  2025-05-14 15:13 ` [PATCH V2 10/15] arc/perf: " kan.liang
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: kan.liang @ 2025-05-14 15:13 UTC (permalink / raw)
  To: peterz, mingo, namhyung, irogers, mark.rutland, linux-kernel,
	linux-perf-users
  Cc: eranian, ctshao, tmricht, Kan Liang, linux-alpha

From: Kan Liang <kan.liang@linux.intel.com>

Throttle support has been added to the generic code. Remove the
driver-specific throttle support.

Besides throttling, perf_event_overflow() may return true because of
the event_limit. In that case, the event is already disabled via
perf_event_disable_inatomic(), so pmu->stop() is not required either.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Cc: linux-alpha@vger.kernel.org
---
 arch/alpha/kernel/perf_event.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/arch/alpha/kernel/perf_event.c b/arch/alpha/kernel/perf_event.c
index 1f0eb4f25c0f..a3eaab094ece 100644
--- a/arch/alpha/kernel/perf_event.c
+++ b/arch/alpha/kernel/perf_event.c
@@ -852,14 +852,9 @@ static void alpha_perf_event_irq_handler(unsigned long la_ptr,
 	alpha_perf_event_update(event, hwc, idx, alpha_pmu->pmc_max_period[idx]+1);
 	perf_sample_data_init(&data, 0, hwc->last_period);
 
-	if (alpha_perf_event_set_period(event, hwc, idx)) {
-		if (perf_event_overflow(event, &data, regs)) {
-			/* Interrupts coming too quickly; "throttle" the
-			 * counter, i.e., disable it for a little while.
-			 */
-			alpha_pmu_stop(event, 0);
-		}
-	}
+	if (alpha_perf_event_set_period(event, hwc, idx))
+		perf_event_overflow(event, &data, regs);
+
 	wrperfmon(PERFMON_CMD_ENABLE, cpuc->idx_mask);
 
 	return;
-- 
2.38.1



* [PATCH V2 10/15] arc/perf: Remove driver-specific throttle support
  2025-05-14 15:13 [PATCH V2 00/15] perf: Fix the throttle logic for group kan.liang
                   ` (8 preceding siblings ...)
  2025-05-14 15:13 ` [PATCH V2 09/15] alpha/perf: " kan.liang
@ 2025-05-14 15:13 ` kan.liang
  2025-05-14 15:13 ` [PATCH V2 11/15] csky/perf: " kan.liang
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: kan.liang @ 2025-05-14 15:13 UTC (permalink / raw)
  To: peterz, mingo, namhyung, irogers, mark.rutland, linux-kernel,
	linux-perf-users
  Cc: eranian, ctshao, tmricht, Kan Liang, Vineet Gupta, linux-snps-arc

From: Kan Liang <kan.liang@linux.intel.com>

The throttle support has been added to the generic code. Remove
the driver-specific throttle support.

Besides throttling, perf_event_overflow() may also return true because
of the event_limit. In that case the generic code already disables the
event via perf_event_disable_inatomic(), so the pmu->stop() is not
required either.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Cc: Vineet Gupta <vgupta@ikernel.org>
Cc: linux-snps-arc@lists.infradead.org
---
 arch/arc/kernel/perf_event.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/arc/kernel/perf_event.c b/arch/arc/kernel/perf_event.c
index 6e5a651cd75c..ed6d4f0cd621 100644
--- a/arch/arc/kernel/perf_event.c
+++ b/arch/arc/kernel/perf_event.c
@@ -599,10 +599,8 @@ static irqreturn_t arc_pmu_intr(int irq, void *dev)
 
 		arc_perf_event_update(event, &event->hw, event->hw.idx);
 		perf_sample_data_init(&data, 0, hwc->last_period);
-		if (arc_pmu_event_set_period(event)) {
-			if (perf_event_overflow(event, &data, regs))
-				arc_pmu_stop(event, 0);
-		}
+		if (arc_pmu_event_set_period(event))
+			perf_event_overflow(event, &data, regs);
 
 		active_ints &= ~BIT(idx);
 	} while (active_ints);
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH V2 11/15] csky/perf: Remove driver-specific throttle support
  2025-05-14 15:13 [PATCH V2 00/15] perf: Fix the throttle logic for group kan.liang
                   ` (9 preceding siblings ...)
  2025-05-14 15:13 ` [PATCH V2 10/15] arc/perf: " kan.liang
@ 2025-05-14 15:13 ` kan.liang
  2025-05-15  6:34   ` Guo Ren
  2025-05-14 15:13 ` [PATCH V2 12/15] loongarch/perf: " kan.liang
                   ` (3 subsequent siblings)
  14 siblings, 1 reply; 26+ messages in thread
From: kan.liang @ 2025-05-14 15:13 UTC (permalink / raw)
  To: peterz, mingo, namhyung, irogers, mark.rutland, linux-kernel,
	linux-perf-users
  Cc: eranian, ctshao, tmricht, Kan Liang, Mao Han, Guo Ren, linux-csky

From: Kan Liang <kan.liang@linux.intel.com>

The throttle support has been added to the generic code. Remove
the driver-specific throttle support.

Besides throttling, perf_event_overflow() may also return true because
of the event_limit. In that case the generic code already disables the
event via perf_event_disable_inatomic(), so the pmu->stop() is not
required either.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Cc: Mao Han <han_mao@c-sky.com>
Cc: Guo Ren <ren_guo@c-sky.com>
Cc: linux-csky@vger.kernel.org
---
 arch/csky/kernel/perf_event.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/csky/kernel/perf_event.c b/arch/csky/kernel/perf_event.c
index e5f18420ce64..e0a36acd265b 100644
--- a/arch/csky/kernel/perf_event.c
+++ b/arch/csky/kernel/perf_event.c
@@ -1139,8 +1139,7 @@ static irqreturn_t csky_pmu_handle_irq(int irq_num, void *dev)
 		perf_sample_data_init(&data, 0, hwc->last_period);
 		csky_pmu_event_set_period(event);
 
-		if (perf_event_overflow(event, &data, regs))
-			csky_pmu_stop_event(event);
+		perf_event_overflow(event, &data, regs);
 	}
 
 	csky_pmu_enable(&csky_pmu.pmu);
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH V2 12/15] loongarch/perf: Remove driver-specific throttle support
  2025-05-14 15:13 [PATCH V2 00/15] perf: Fix the throttle logic for group kan.liang
                   ` (10 preceding siblings ...)
  2025-05-14 15:13 ` [PATCH V2 11/15] csky/perf: " kan.liang
@ 2025-05-14 15:13 ` kan.liang
  2025-05-14 15:13 ` [PATCH V2 13/15] sparc/perf: " kan.liang
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: kan.liang @ 2025-05-14 15:13 UTC (permalink / raw)
  To: peterz, mingo, namhyung, irogers, mark.rutland, linux-kernel,
	linux-perf-users
  Cc: eranian, ctshao, tmricht, Kan Liang, Bibo Mao, Huacai Chen,
	loongarch

From: Kan Liang <kan.liang@linux.intel.com>

The throttle support has been added to the generic code. Remove
the driver-specific throttle support.

Besides throttling, perf_event_overflow() may also return true because
of the event_limit. In that case the generic code already disables the
event via perf_event_disable_inatomic(), so the pmu->stop() is not
required either.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Cc: Bibo Mao <maobibo@loongson.cn>
Cc: Huacai Chen <chenhuacai@loongson.cn>
Cc: loongarch@lists.linux.dev
---
 arch/loongarch/kernel/perf_event.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/loongarch/kernel/perf_event.c b/arch/loongarch/kernel/perf_event.c
index f86a4b838dd7..8ad098703488 100644
--- a/arch/loongarch/kernel/perf_event.c
+++ b/arch/loongarch/kernel/perf_event.c
@@ -479,8 +479,7 @@ static void handle_associated_event(struct cpu_hw_events *cpuc, int idx,
 	if (!loongarch_pmu_event_set_period(event, hwc, idx))
 		return;
 
-	if (perf_event_overflow(event, data, regs))
-		loongarch_pmu_disable_event(idx);
+	perf_event_overflow(event, data, regs);
 }
 
 static irqreturn_t pmu_handle_irq(int irq, void *dev)
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH V2 13/15] sparc/perf: Remove driver-specific throttle support
  2025-05-14 15:13 [PATCH V2 00/15] perf: Fix the throttle logic for group kan.liang
                   ` (11 preceding siblings ...)
  2025-05-14 15:13 ` [PATCH V2 12/15] loongarch/perf: " kan.liang
@ 2025-05-14 15:13 ` kan.liang
  2025-05-14 15:14 ` [PATCH V2 14/15] xtensa/perf: " kan.liang
  2025-05-14 15:14 ` [PATCH V2 15/15] mips/perf: " kan.liang
  14 siblings, 0 replies; 26+ messages in thread
From: kan.liang @ 2025-05-14 15:13 UTC (permalink / raw)
  To: peterz, mingo, namhyung, irogers, mark.rutland, linux-kernel,
	linux-perf-users
  Cc: eranian, ctshao, tmricht, Kan Liang, David S . Miller, sparclinux

From: Kan Liang <kan.liang@linux.intel.com>

The throttle support has been added to the generic code. Remove
the driver-specific throttle support.

Besides throttling, perf_event_overflow() may also return true because
of the event_limit. In that case the generic code already disables the
event via perf_event_disable_inatomic(), so the pmu->stop() is not
required either.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: sparclinux@vger.kernel.org
---
 arch/sparc/kernel/perf_event.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/sparc/kernel/perf_event.c b/arch/sparc/kernel/perf_event.c
index f02a283a8e8f..cae4d33002a5 100644
--- a/arch/sparc/kernel/perf_event.c
+++ b/arch/sparc/kernel/perf_event.c
@@ -1668,8 +1668,7 @@ static int __kprobes perf_event_nmi_handler(struct notifier_block *self,
 		if (!sparc_perf_event_set_period(event, hwc, idx))
 			continue;
 
-		if (perf_event_overflow(event, &data, regs))
-			sparc_pmu_stop(event, 0);
+		perf_event_overflow(event, &data, regs);
 	}
 
 	finish_clock = sched_clock();
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH V2 14/15] xtensa/perf: Remove driver-specific throttle support
  2025-05-14 15:13 [PATCH V2 00/15] perf: Fix the throttle logic for group kan.liang
                   ` (12 preceding siblings ...)
  2025-05-14 15:13 ` [PATCH V2 13/15] sparc/perf: " kan.liang
@ 2025-05-14 15:14 ` kan.liang
  2025-05-14 15:14 ` [PATCH V2 15/15] mips/perf: " kan.liang
  14 siblings, 0 replies; 26+ messages in thread
From: kan.liang @ 2025-05-14 15:14 UTC (permalink / raw)
  To: peterz, mingo, namhyung, irogers, mark.rutland, linux-kernel,
	linux-perf-users
  Cc: eranian, ctshao, tmricht, Kan Liang, Max Filippov

From: Kan Liang <kan.liang@linux.intel.com>

The throttle support has been added to the generic code. Remove
the driver-specific throttle support.

Besides throttling, perf_event_overflow() may also return true because
of the event_limit. In that case the generic code already disables the
event via perf_event_disable_inatomic(), so the pmu->stop() is not
required either.

Reviewed-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
---

Changes since V1:
- Add Reviewed-by from Max

 arch/xtensa/kernel/perf_event.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/xtensa/kernel/perf_event.c b/arch/xtensa/kernel/perf_event.c
index 183618090d05..223f1d452310 100644
--- a/arch/xtensa/kernel/perf_event.c
+++ b/arch/xtensa/kernel/perf_event.c
@@ -388,8 +388,7 @@ irqreturn_t xtensa_pmu_irq_handler(int irq, void *dev_id)
 			struct pt_regs *regs = get_irq_regs();
 
 			perf_sample_data_init(&data, 0, last_period);
-			if (perf_event_overflow(event, &data, regs))
-				xtensa_pmu_stop(event, 0);
+			perf_event_overflow(event, &data, regs);
 		}
 
 		rc = IRQ_HANDLED;
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH V2 15/15] mips/perf: Remove driver-specific throttle support
  2025-05-14 15:13 [PATCH V2 00/15] perf: Fix the throttle logic for group kan.liang
                   ` (13 preceding siblings ...)
  2025-05-14 15:14 ` [PATCH V2 14/15] xtensa/perf: " kan.liang
@ 2025-05-14 15:14 ` kan.liang
  14 siblings, 0 replies; 26+ messages in thread
From: kan.liang @ 2025-05-14 15:14 UTC (permalink / raw)
  To: peterz, mingo, namhyung, irogers, mark.rutland, linux-kernel,
	linux-perf-users
  Cc: eranian, ctshao, tmricht, Kan Liang, Thomas Bogendoerfer,
	linux-mips

From: Kan Liang <kan.liang@linux.intel.com>

The throttle support has been added to the generic code. Remove
the driver-specific throttle support.

Besides throttling, perf_event_overflow() may also return true because
of the event_limit. In that case the generic code already disables the
event via perf_event_disable_inatomic(), so the pmu->stop() is not
required either.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: linux-mips@vger.kernel.org
---
 arch/mips/kernel/perf_event_mipsxx.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/mips/kernel/perf_event_mipsxx.c b/arch/mips/kernel/perf_event_mipsxx.c
index c4d6b09136b1..196a070349b0 100644
--- a/arch/mips/kernel/perf_event_mipsxx.c
+++ b/arch/mips/kernel/perf_event_mipsxx.c
@@ -791,8 +791,7 @@ static void handle_associated_event(struct cpu_hw_events *cpuc,
 	if (!mipspmu_event_set_period(event, hwc, idx))
 		return;
 
-	if (perf_event_overflow(event, data, regs))
-		mipsxx_pmu_disable_event(idx);
+	perf_event_overflow(event, data, regs);
 }
 
 
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH V2 11/15] csky/perf: Remove driver-specific throttle support
  2025-05-14 15:13 ` [PATCH V2 11/15] csky/perf: " kan.liang
@ 2025-05-15  6:34   ` Guo Ren
  0 siblings, 0 replies; 26+ messages in thread
From: Guo Ren @ 2025-05-15  6:34 UTC (permalink / raw)
  To: kan.liang
  Cc: peterz, mingo, namhyung, irogers, mark.rutland, linux-kernel,
	linux-perf-users, eranian, ctshao, tmricht, Mao Han, Guo Ren,
	linux-csky

On Wed, May 14, 2025 at 6:49 PM <kan.liang@linux.intel.com> wrote:
>
> From: Kan Liang <kan.liang@linux.intel.com>
>
> The throttle support has been added in the generic code. Remove
> the driver-specific throttle support.
Acked-by: Guo Ren <guoren@kernel.org>

>
> Besides the throttle, perf_event_overflow may return true because of
> event_limit. It already does an inatomic event disable. The pmu->stop
> is not required either.
>
> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
> Cc: Mao Han <han_mao@c-sky.com>
> Cc: Guo Ren <ren_guo@c-sky.com>
> Cc: linux-csky@vger.kernel.org
> ---
>  arch/csky/kernel/perf_event.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/arch/csky/kernel/perf_event.c b/arch/csky/kernel/perf_event.c
> index e5f18420ce64..e0a36acd265b 100644
> --- a/arch/csky/kernel/perf_event.c
> +++ b/arch/csky/kernel/perf_event.c
> @@ -1139,8 +1139,7 @@ static irqreturn_t csky_pmu_handle_irq(int irq_num, void *dev)
>                 perf_sample_data_init(&data, 0, hwc->last_period);
>                 csky_pmu_event_set_period(event);
>
> -               if (perf_event_overflow(event, &data, regs))
> -                       csky_pmu_stop_event(event);
> +               perf_event_overflow(event, &data, regs);
>         }
>
>         csky_pmu_enable(&csky_pmu.pmu);
> --
> 2.38.1
>
>


-- 
Best Regards
 Guo Ren

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH V2 01/15] perf: Fix the throttle logic for a group
  2025-05-14 15:13 ` [PATCH V2 01/15] perf: Fix the throttle logic for a group kan.liang
@ 2025-05-15  9:43   ` Leo Yan
  2025-05-15 12:55     ` Liang, Kan
  0 siblings, 1 reply; 26+ messages in thread
From: Leo Yan @ 2025-05-15  9:43 UTC (permalink / raw)
  To: kan.liang
  Cc: peterz, mingo, namhyung, irogers, mark.rutland, linux-kernel,
	linux-perf-users, eranian, ctshao, tmricht

On Wed, May 14, 2025 at 08:13:47AM -0700, kan.liang@linux.intel.com wrote:
> From: Kan Liang <kan.liang@linux.intel.com>
> 
> The current throttle logic doesn't work well with a group, e.g., the
> following sampling-read case.
> 
> $ perf record -e "{cycles,cycles}:S" ...
> 
> $ perf report -D | grep THROTTLE | tail -2
>             THROTTLE events:        426  ( 9.0%)
>           UNTHROTTLE events:        425  ( 9.0%)
> 
> $ perf report -D | grep PERF_RECORD_SAMPLE -a4 | tail -n 5
> 0 1020120874009167 0x74970 [0x68]: PERF_RECORD_SAMPLE(IP, 0x1):
> ... sample_read:
> .... group nr 2
> ..... id 0000000000000327, value 000000000cbb993a, lost 0
> ..... id 0000000000000328, value 00000002211c26df, lost 0
> 
> The second cycles event has a much larger value than the first cycles
> event in the same group.
> 
> The current throttle logic in the generic code only logs the THROTTLE
> event. It relies on the specific driver implementation to disable
> events. For all ARCHs, the implementation is similar. Only the event is
> disabled, rather than the group.
> 
> The logic to disable the group should be generic for all ARCHs. Add the
> logic in the generic code. The following patch will remove the buggy
> driver-specific implementation.
> 
> The throttle only happens when an event is overflowed. Stop the entire
> group when any event in the group triggers the throttle.
> The MAX_INTERRUPTS is set to all throttle events.
> 
> The unthrottled could happen in 3 places.
> - event/group sched. All events in the group are scheduled one by one.
>   All of them will be unthrottled eventually. Nothing needs to be
>   changed.
> - The perf_adjust_freq_unthr_events for each tick. Needs to restart the
>   group altogether.
> - The __perf_event_period(). The whole group needs to be restarted
>   altogether as well.
> 
> With the fix,
> $ sudo perf report -D | grep PERF_RECORD_SAMPLE -a4 | tail -n 5
> 0 3573470770332 0x12f5f8 [0x70]: PERF_RECORD_SAMPLE(IP, 0x2):
> ... sample_read:
> .... group nr 2
> ..... id 0000000000000a28, value 00000004fd3dfd8f, lost 0
> ..... id 0000000000000a29, value 00000004fd3dfd8f, lost 0
> 
> Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
> ---
> 
> Changes since V1:
> - Apply the suggested throttle/unthrottle functions from Peter.
>   The MAX_INTERRUPTS and throttle logs are applied to all events.
> - Update the description and comments accordingly
> 
>  kernel/events/core.c | 58 +++++++++++++++++++++++++++++++++-----------
>  1 file changed, 44 insertions(+), 14 deletions(-)
> 
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index a84abc2b7f20..a270fcda766d 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -2734,6 +2734,39 @@ void perf_event_disable_inatomic(struct perf_event *event)
>  static void perf_log_throttle(struct perf_event *event, int enable);
>  static void perf_log_itrace_start(struct perf_event *event);
>  
> +static void perf_event_unthrottle(struct perf_event *event, bool start)
> +{
> +	event->hw.interrupts = 0;
> +	if (start)
> +		event->pmu->start(event, 0);
> +	perf_log_throttle(event, 1);
> +}
> +
> +static void perf_event_throttle(struct perf_event *event)
> +{
> +	event->pmu->stop(event, 0);
> +	event->hw.interrupts = MAX_INTERRUPTS;
> +	perf_log_throttle(event, 0);
> +}
> +
> +static void perf_event_unthrottle_group(struct perf_event *event, bool start)
> +{
> +	struct perf_event *sibling, *leader = event->group_leader;
> +
> +	perf_event_unthrottle(leader, leader != event || start);
> +	for_each_sibling_event(sibling, leader)
> +		perf_event_unthrottle(sibling, sibling != event || start);

It seems to me that the condition "leader != event || start" is a bit
tricky (similarly for the check "sibling != event || start").

If a session sets the frequency (with option -F in perf tool), the
following flow is triggered:

  perf_adjust_freq_unthr_events()
    `> perf_event_unthrottle_group(event, false);

The argument "start" is false, so all sibling events will be enabled,
but the event pointed to by the "event" argument remains disabled.
Though the __perf_event_period() function will enable all events with
the adjusted period, it is still risky for a counting discrepancy
caused by the flow described above.

Thanks,
Leo

> +}
> +
> +static void perf_event_throttle_group(struct perf_event *event)
> +{
> +	struct perf_event *sibling, *leader = event->group_leader;
> +
> +	perf_event_throttle(leader);
> +	for_each_sibling_event(sibling, leader)
> +		perf_event_throttle(sibling);
> +}
> +
>  static int
>  event_sched_in(struct perf_event *event, struct perf_event_context *ctx)
>  {
> @@ -4389,10 +4422,8 @@ static void perf_adjust_freq_unthr_events(struct list_head *event_list)
>  		hwc = &event->hw;
>  
>  		if (hwc->interrupts == MAX_INTERRUPTS) {
> -			hwc->interrupts = 0;
> -			perf_log_throttle(event, 1);
> -			if (!event->attr.freq || !event->attr.sample_freq)
> -				event->pmu->start(event, 0);
> +			perf_event_unthrottle_group(event,
> +				!event->attr.freq || !event->attr.sample_freq);
>  		}
>  
>  		if (!event->attr.freq || !event->attr.sample_freq)
> @@ -6421,14 +6452,6 @@ static void __perf_event_period(struct perf_event *event,
>  	active = (event->state == PERF_EVENT_STATE_ACTIVE);
>  	if (active) {
>  		perf_pmu_disable(event->pmu);
> -		/*
> -		 * We could be throttled; unthrottle now to avoid the tick
> -		 * trying to unthrottle while we already re-started the event.
> -		 */
> -		if (event->hw.interrupts == MAX_INTERRUPTS) {
> -			event->hw.interrupts = 0;
> -			perf_log_throttle(event, 1);
> -		}
>  		event->pmu->stop(event, PERF_EF_UPDATE);
>  	}
>  
> @@ -6436,6 +6459,14 @@ static void __perf_event_period(struct perf_event *event,
>  
>  	if (active) {
>  		event->pmu->start(event, PERF_EF_RELOAD);
> +		/*
> +		 * Once the period is force-reset, the event starts immediately.
> +		 * But the event/group could be throttled. Unthrottle the
> +		 * event/group now to avoid the next tick trying to unthrottle
> +		 * while we already re-started the event/group.
> +		 */
> +		if (event->hw.interrupts == MAX_INTERRUPTS)
> +			perf_event_unthrottle_group(event, false);
>  		perf_pmu_enable(event->pmu);
>  	}
>  }
> @@ -10326,8 +10357,7 @@ __perf_event_account_interrupt(struct perf_event *event, int throttle)
>  	if (unlikely(throttle && hwc->interrupts >= max_samples_per_tick)) {
>  		__this_cpu_inc(perf_throttled_count);
>  		tick_dep_set_cpu(smp_processor_id(), TICK_DEP_BIT_PERF_EVENTS);
> -		hwc->interrupts = MAX_INTERRUPTS;
> -		perf_log_throttle(event, 0);
> +		perf_event_throttle_group(event);
>  		ret = 1;
>  	}
>  
> -- 
> 2.38.1
> 
> 

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH V2 01/15] perf: Fix the throttle logic for a group
  2025-05-15  9:43   ` Leo Yan
@ 2025-05-15 12:55     ` Liang, Kan
  2025-05-16 12:51       ` Leo Yan
  0 siblings, 1 reply; 26+ messages in thread
From: Liang, Kan @ 2025-05-15 12:55 UTC (permalink / raw)
  To: Leo Yan
  Cc: peterz, mingo, namhyung, irogers, mark.rutland, linux-kernel,
	linux-perf-users, eranian, ctshao, tmricht



On 2025-05-15 5:43 a.m., Leo Yan wrote:
> On Wed, May 14, 2025 at 08:13:47AM -0700, kan.liang@linux.intel.com wrote:
>> From: Kan Liang <kan.liang@linux.intel.com>
>>
>> The current throttle logic doesn't work well with a group, e.g., the
>> following sampling-read case.
>>
>> $ perf record -e "{cycles,cycles}:S" ...
>>
>> $ perf report -D | grep THROTTLE | tail -2
>>             THROTTLE events:        426  ( 9.0%)
>>           UNTHROTTLE events:        425  ( 9.0%)
>>
>> $ perf report -D | grep PERF_RECORD_SAMPLE -a4 | tail -n 5
>> 0 1020120874009167 0x74970 [0x68]: PERF_RECORD_SAMPLE(IP, 0x1):
>> ... sample_read:
>> .... group nr 2
>> ..... id 0000000000000327, value 000000000cbb993a, lost 0
>> ..... id 0000000000000328, value 00000002211c26df, lost 0
>>
>> The second cycles event has a much larger value than the first cycles
>> event in the same group.
>>
>> The current throttle logic in the generic code only logs the THROTTLE
>> event. It relies on the specific driver implementation to disable
>> events. For all ARCHs, the implementation is similar. Only the event is
>> disabled, rather than the group.
>>
>> The logic to disable the group should be generic for all ARCHs. Add the
>> logic in the generic code. The following patch will remove the buggy
>> driver-specific implementation.
>>
>> The throttle only happens when an event is overflowed. Stop the entire
>> group when any event in the group triggers the throttle.
>> The MAX_INTERRUPTS is set to all throttle events.
>>
>> The unthrottled could happen in 3 places.
>> - event/group sched. All events in the group are scheduled one by one.
>>   All of them will be unthrottled eventually. Nothing needs to be
>>   changed.
>> - The perf_adjust_freq_unthr_events for each tick. Needs to restart the
>>   group altogether.
>> - The __perf_event_period(). The whole group needs to be restarted
>>   altogether as well.
>>
>> With the fix,
>> $ sudo perf report -D | grep PERF_RECORD_SAMPLE -a4 | tail -n 5
>> 0 3573470770332 0x12f5f8 [0x70]: PERF_RECORD_SAMPLE(IP, 0x2):
>> ... sample_read:
>> .... group nr 2
>> ..... id 0000000000000a28, value 00000004fd3dfd8f, lost 0
>> ..... id 0000000000000a29, value 00000004fd3dfd8f, lost 0
>>
>> Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
>> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
>> ---
>>
>> Changes since V1:
>> - Apply the suggested throttle/unthrottle functions from Peter.
>>   The MAX_INTERRUPTS and throttle logs are applied to all events.
>> - Update the description and comments accordingly
>>
>>  kernel/events/core.c | 58 +++++++++++++++++++++++++++++++++-----------
>>  1 file changed, 44 insertions(+), 14 deletions(-)
>>
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index a84abc2b7f20..a270fcda766d 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -2734,6 +2734,39 @@ void perf_event_disable_inatomic(struct perf_event *event)
>>  static void perf_log_throttle(struct perf_event *event, int enable);
>>  static void perf_log_itrace_start(struct perf_event *event);
>>  
>> +static void perf_event_unthrottle(struct perf_event *event, bool start)
>> +{
>> +	event->hw.interrupts = 0;
>> +	if (start)
>> +		event->pmu->start(event, 0);
>> +	perf_log_throttle(event, 1);
>> +}
>> +
>> +static void perf_event_throttle(struct perf_event *event)
>> +{
>> +	event->pmu->stop(event, 0);
>> +	event->hw.interrupts = MAX_INTERRUPTS;
>> +	perf_log_throttle(event, 0);
>> +}
>> +
>> +static void perf_event_unthrottle_group(struct perf_event *event, bool start)
>> +{
>> +	struct perf_event *sibling, *leader = event->group_leader;
>> +
>> +	perf_event_unthrottle(leader, leader != event || start);
>> +	for_each_sibling_event(sibling, leader)
>> +		perf_event_unthrottle(sibling, sibling != event || start);
> 
> Seems to me that the condition "leader != event || start" is bit tricky
> (similarly for the check "sibling != event || start").
> 
> If a session sets the frequency (with option -F in perf tool), the
> following flow is triggered:
> 
>   perf_adjust_freq_unthr_events()
>     `> perf_event_unthrottle_group(event, false);
> 
> The argument "start" is false, so all sibling events will be enabled,
> but the event pointed by the "event" argument remains disabled.  

Right, because the following code will adjust the period of the event
and start it.
The PMU is disabled at that moment, so there is no difference between
starting the leader first or a member first.

> Though
> the __perf_event_period() function will enables all events with adjusted
> period, but it is still risky for counting discrepancy caused by the
> flow described above.

The __perf_event_period() case is similar. In both cases, the period
of the event has to be adjusted before re-starting it, which has to be
done outside of perf_event_unthrottle_group().

Thanks,
Kan
>
> Thanks,
> Leo
> 
>> +}
>> +
>> +static void perf_event_throttle_group(struct perf_event *event)
>> +{
>> +	struct perf_event *sibling, *leader = event->group_leader;
>> +
>> +	perf_event_throttle(leader);
>> +	for_each_sibling_event(sibling, leader)
>> +		perf_event_throttle(sibling);
>> +}
>> +
>>  static int
>>  event_sched_in(struct perf_event *event, struct perf_event_context *ctx)
>>  {
>> @@ -4389,10 +4422,8 @@ static void perf_adjust_freq_unthr_events(struct list_head *event_list)
>>  		hwc = &event->hw;
>>  
>>  		if (hwc->interrupts == MAX_INTERRUPTS) {
>> -			hwc->interrupts = 0;
>> -			perf_log_throttle(event, 1);
>> -			if (!event->attr.freq || !event->attr.sample_freq)
>> -				event->pmu->start(event, 0);
>> +			perf_event_unthrottle_group(event,
>> +				!event->attr.freq || !event->attr.sample_freq);
>>  		}
>>  
>>  		if (!event->attr.freq || !event->attr.sample_freq)
>> @@ -6421,14 +6452,6 @@ static void __perf_event_period(struct perf_event *event,
>>  	active = (event->state == PERF_EVENT_STATE_ACTIVE);
>>  	if (active) {
>>  		perf_pmu_disable(event->pmu);
>> -		/*
>> -		 * We could be throttled; unthrottle now to avoid the tick
>> -		 * trying to unthrottle while we already re-started the event.
>> -		 */
>> -		if (event->hw.interrupts == MAX_INTERRUPTS) {
>> -			event->hw.interrupts = 0;
>> -			perf_log_throttle(event, 1);
>> -		}
>>  		event->pmu->stop(event, PERF_EF_UPDATE);
>>  	}
>>  
>> @@ -6436,6 +6459,14 @@ static void __perf_event_period(struct perf_event *event,
>>  
>>  	if (active) {
>>  		event->pmu->start(event, PERF_EF_RELOAD);
>> +		/*
>> +		 * Once the period is force-reset, the event starts immediately.
>> +		 * But the event/group could be throttled. Unthrottle the
>> +		 * event/group now to avoid the next tick trying to unthrottle
>> +		 * while we already re-started the event/group.
>> +		 */
>> +		if (event->hw.interrupts == MAX_INTERRUPTS)
>> +			perf_event_unthrottle_group(event, false);
>>  		perf_pmu_enable(event->pmu);
>>  	}
>>  }
>> @@ -10326,8 +10357,7 @@ __perf_event_account_interrupt(struct perf_event *event, int throttle)
>>  	if (unlikely(throttle && hwc->interrupts >= max_samples_per_tick)) {
>>  		__this_cpu_inc(perf_throttled_count);
>>  		tick_dep_set_cpu(smp_processor_id(), TICK_DEP_BIT_PERF_EVENTS);
>> -		hwc->interrupts = MAX_INTERRUPTS;
>> -		perf_log_throttle(event, 0);
>> +		perf_event_throttle_group(event);
>>  		ret = 1;
>>  	}
>>  
>> -- 
>> 2.38.1
>>
>>
> 


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH V2 06/15] s390/perf: Remove driver-specific throttle support
  2025-05-14 15:13 ` [PATCH V2 06/15] s390/perf: " kan.liang
@ 2025-05-15 13:15   ` Thomas Richter
  2025-05-15 13:56     ` Liang, Kan
  0 siblings, 1 reply; 26+ messages in thread
From: Thomas Richter @ 2025-05-15 13:15 UTC (permalink / raw)
  To: kan.liang, peterz, mingo, namhyung, irogers, mark.rutland,
	linux-kernel, linux-perf-users
  Cc: eranian, ctshao, linux-s390

On 5/14/25 17:13, kan.liang@linux.intel.com wrote:
> From: Kan Liang <kan.liang@linux.intel.com>
> 
> The throttle support has been added in the generic code. Remove
> the driver-specific throttle support.
> 
> Besides the throttle, perf_event_overflow may return true because of
> event_limit. It already does an inatomic event disable. The pmu->stop
> is not required either.
> 
> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
> Cc: Thomas Richter <tmricht@linux.ibm.com>
> Cc: linux-s390@vger.kernel.org
> ---
>  arch/s390/kernel/perf_cpum_cf.c | 2 --
>  arch/s390/kernel/perf_cpum_sf.c | 5 +----
>  2 files changed, 1 insertion(+), 6 deletions(-)
> 
> diff --git a/arch/s390/kernel/perf_cpum_cf.c b/arch/s390/kernel/perf_cpum_cf.c
> index e657fad7e376..6a262e198e35 100644
> --- a/arch/s390/kernel/perf_cpum_cf.c
> +++ b/arch/s390/kernel/perf_cpum_cf.c
> @@ -980,8 +980,6 @@ static int cfdiag_push_sample(struct perf_event *event,
>  	}
>  
>  	overflow = perf_event_overflow(event, &data, &regs);
> -	if (overflow)
> -		event->pmu->stop(event, 0);
>  
>  	perf_event_update_userpage(event);
>  	return overflow;
> diff --git a/arch/s390/kernel/perf_cpum_sf.c b/arch/s390/kernel/perf_cpum_sf.c
> index ad22799d8a7d..91469401f2c9 100644
> --- a/arch/s390/kernel/perf_cpum_sf.c
> +++ b/arch/s390/kernel/perf_cpum_sf.c
> @@ -1072,10 +1072,7 @@ static int perf_push_sample(struct perf_event *event,
>  	overflow = 0;
>  	if (perf_event_exclude(event, &regs, sde_regs))
>  		goto out;
> -	if (perf_event_overflow(event, &data, &regs)) {
> -		overflow = 1;
> -		event->pmu->stop(event, 0);
> -	}
> +	overflow = perf_event_overflow(event, &data, &regs);
>  	perf_event_update_userpage(event);
>  out:
>  	return overflow;

I have installed patches 1 and 6 on top of the linux-next kernel today.
The results look good, much better than before, but I still do not
get both counter values in sync on every iteration.

Tested-by: Thomas Richter <tmricht@linux.ibm.com>

-- 
Thomas Richter, Dept 3303, IBM s390 Linux Development, Boeblingen, Germany
--
IBM Deutschland Research & Development GmbH

Vorsitzender des Aufsichtsrats: Wolfgang Wendt

Geschäftsführung: David Faller

Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH V2 06/15] s390/perf: Remove driver-specific throttle support
  2025-05-15 13:15   ` Thomas Richter
@ 2025-05-15 13:56     ` Liang, Kan
  0 siblings, 0 replies; 26+ messages in thread
From: Liang, Kan @ 2025-05-15 13:56 UTC (permalink / raw)
  To: Thomas Richter, peterz, mingo, namhyung, irogers, mark.rutland,
	linux-kernel, linux-perf-users
  Cc: eranian, ctshao, linux-s390



On 2025-05-15 9:15 a.m., Thomas Richter wrote:
> On 5/14/25 17:13, kan.liang@linux.intel.com wrote:
>> From: Kan Liang <kan.liang@linux.intel.com>
>>
>> The throttle support has been added in the generic code. Remove
>> the driver-specific throttle support.
>>
>> Besides the throttle, perf_event_overflow may return true because of
>> event_limit. It already does an inatomic event disable. The pmu->stop
>> is not required either.
>>
>> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
>> Cc: Thomas Richter <tmricht@linux.ibm.com>
>> Cc: linux-s390@vger.kernel.org
>> ---
>>  arch/s390/kernel/perf_cpum_cf.c | 2 --
>>  arch/s390/kernel/perf_cpum_sf.c | 5 +----
>>  2 files changed, 1 insertion(+), 6 deletions(-)
>>
>> diff --git a/arch/s390/kernel/perf_cpum_cf.c b/arch/s390/kernel/perf_cpum_cf.c
>> index e657fad7e376..6a262e198e35 100644
>> --- a/arch/s390/kernel/perf_cpum_cf.c
>> +++ b/arch/s390/kernel/perf_cpum_cf.c
>> @@ -980,8 +980,6 @@ static int cfdiag_push_sample(struct perf_event *event,
>>  	}
>>  
>>  	overflow = perf_event_overflow(event, &data, &regs);
>> -	if (overflow)
>> -		event->pmu->stop(event, 0);
>>  
>>  	perf_event_update_userpage(event);
>>  	return overflow;
>> diff --git a/arch/s390/kernel/perf_cpum_sf.c b/arch/s390/kernel/perf_cpum_sf.c
>> index ad22799d8a7d..91469401f2c9 100644
>> --- a/arch/s390/kernel/perf_cpum_sf.c
>> +++ b/arch/s390/kernel/perf_cpum_sf.c
>> @@ -1072,10 +1072,7 @@ static int perf_push_sample(struct perf_event *event,
>>  	overflow = 0;
>>  	if (perf_event_exclude(event, &regs, sde_regs))
>>  		goto out;
>> -	if (perf_event_overflow(event, &data, &regs)) {
>> -		overflow = 1;
>> -		event->pmu->stop(event, 0);
>> -	}
>> +	overflow = perf_event_overflow(event, &data, &regs);
>>  	perf_event_update_userpage(event);
>>  out:
>>  	return overflow;
> 
> I have installed patches 1 and 6 on top of the linux-next kernel today.
> The results look good, much better than before, but I still do not
> get both counter values in sync on each iteration all the time.
>

For Intel platforms, there is a global control register that can
disable/enable all counters simultaneously. It's invoked in the
pmu_enable/disable pair, which guarantees that the group
starts/stops/reads in sync.
If there is no such synchronization mechanism in the hardware, the
events in a group usually start/stop one by one, and there may be a
small gap between each event.

I'm not familiar with s390, but I guess that may be why you didn't get
both counter values in sync.

Without hardware support, the patch set cannot completely close the gap
between counters, but it should be able to minimize it.

> Tested-by: Thomas Richter <tmricht@linux.ibm.com>
Thanks!
Kan

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH V2 01/15] perf: Fix the throttle logic for a group
  2025-05-15 12:55     ` Liang, Kan
@ 2025-05-16 12:51       ` Leo Yan
  2025-05-16 13:28         ` Liang, Kan
  0 siblings, 1 reply; 26+ messages in thread
From: Leo Yan @ 2025-05-16 12:51 UTC (permalink / raw)
  To: Liang, Kan
  Cc: peterz, mingo, namhyung, irogers, mark.rutland, linux-kernel,
	linux-perf-users, eranian, ctshao, tmricht

On Thu, May 15, 2025 at 08:55:05AM -0400, Liang, Kan wrote:

[...]

> >> +static void perf_event_unthrottle_group(struct perf_event *event, bool start)
> >> +{
> >> +	struct perf_event *sibling, *leader = event->group_leader;
> >> +
> >> +	perf_event_unthrottle(leader, leader != event || start);
> >> +	for_each_sibling_event(sibling, leader)
> >> +		perf_event_unthrottle(sibling, sibling != event || start);
> > 
> > Seems to me that the condition "leader != event || start" is a bit tricky
> > (similarly for the check "sibling != event || start").
> > 
> > If a session sets the frequency (with option -F in perf tool), the
> > following flow is triggered:
> > 
> >   perf_adjust_freq_unthr_events()
> >     `> perf_event_unthrottle_group(event, false);
> > 
> > The argument "start" is false, so all sibling events will be enabled,
> > but the event pointed to by the "event" argument remains disabled.
> 
> Right. Because the following code will adjust the period of the event
> and start it.
> The PMU is disabled at the moment. There is no difference in starting
> the leader first or the member first.

Thanks for the explanation. In the case above, as you said, all events will
be enabled either in perf_event_unthrottle_group() or in
perf_adjust_freq_unthr_events() with a recalculated period.

Just a minor suggestion. It seems to me the parameter "start" actually
means "only_enable_sibling". To make it more readable, the function
could be refined as:

static void perf_event_unthrottle_group(struct perf_event *event,
                                        bool only_enable_sibling)
{
	struct perf_event *sibling, *leader = event->group_leader;

	perf_event_unthrottle(leader,
                only_enable_sibling ? leader != event : true);
        ...
}

Thanks,
Leo

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH V2 07/15] perf/arm: Remove driver-specific throttle support
  2025-05-14 15:13 ` [PATCH V2 07/15] perf/arm: " kan.liang
@ 2025-05-16 13:24   ` Leo Yan
  0 siblings, 0 replies; 26+ messages in thread
From: Leo Yan @ 2025-05-16 13:24 UTC (permalink / raw)
  To: kan.liang
  Cc: peterz, mingo, namhyung, irogers, mark.rutland, linux-kernel,
	linux-perf-users, eranian, ctshao, tmricht, Rob Herring,
	Vincenzo Frascino, Will Deacon

On Wed, May 14, 2025 at 08:13:53AM -0700, kan.liang@linux.intel.com wrote:
> From: Kan Liang <kan.liang@linux.intel.com>
> 
> The throttle support has been added in the generic code. Remove
> the driver-specific throttle support.
> 
> Besides the throttle, perf_event_overflow may return true because of
> event_limit. It already does an inatomic event disable. The pmu->stop
> is not required either.
> 
> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Rob Herring (Arm) <robh@kernel.org>
> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
> Cc: Will Deacon <will@kernel.org>
> ---
>  drivers/perf/arm_pmuv3.c      | 3 +--
>  drivers/perf/arm_v6_pmu.c     | 3 +--
>  drivers/perf/arm_v7_pmu.c     | 3 +--
>  drivers/perf/arm_xscale_pmu.c | 6 ++----
>  4 files changed, 5 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
> index e506d59654e7..3db9f4ed17e8 100644
> --- a/drivers/perf/arm_pmuv3.c
> +++ b/drivers/perf/arm_pmuv3.c
> @@ -887,8 +887,7 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
>  		 * an irq_work which will be taken care of in the handling of
>  		 * IPI_IRQ_WORK.
>  		 */
> -		if (perf_event_overflow(event, &data, regs))
> -			cpu_pmu->disable(event);
> +		perf_event_overflow(event, &data, regs);

I did a test for Arm PMUv3; sometimes I get consistent results
across events, but I still see a discrepancy in some runs:

   # perf record -c 400 -C 4,5,6,7 -e "{cycles,cycles}:S" -- sleep 5

   # perf report -D | grep PERF_RECORD_SAMPLE -a4 | tail -n 5
   7 5691046123610 0x63670 [0x68]: PERF_RECORD_SAMPLE(IP, 0x1): 0/0:
   0xffff80008137a6d0 period: 400 addr: 0
   ... sample_read:
   .... group nr 2
   ..... id 00000000000000bf, value 000000000019d7a7, lost 0
   ..... id 00000000000000c3, value 000000000019d3f9, lost 0

Though it does not eliminate the discrepancy totally (maybe it depends
on the hardware mechanism), I do see that this series mitigates the
issue significantly.

Tested-by: Leo Yan <leo.yan@arm.com>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH V2 01/15] perf: Fix the throttle logic for a group
  2025-05-16 12:51       ` Leo Yan
@ 2025-05-16 13:28         ` Liang, Kan
  2025-05-16 14:17           ` Leo Yan
  0 siblings, 1 reply; 26+ messages in thread
From: Liang, Kan @ 2025-05-16 13:28 UTC (permalink / raw)
  To: Leo Yan
  Cc: peterz, mingo, namhyung, irogers, mark.rutland, linux-kernel,
	linux-perf-users, eranian, ctshao, tmricht



On 2025-05-16 8:51 a.m., Leo Yan wrote:
> On Thu, May 15, 2025 at 08:55:05AM -0400, Liang, Kan wrote:
> 
> [...]
> 
>>>> +static void perf_event_unthrottle_group(struct perf_event *event, bool start)
>>>> +{
>>>> +	struct perf_event *sibling, *leader = event->group_leader;
>>>> +
>>>> +	perf_event_unthrottle(leader, leader != event || start);
>>>> +	for_each_sibling_event(sibling, leader)
>>>> +		perf_event_unthrottle(sibling, sibling != event || start);
>>>
>>> Seems to me that the condition "leader != event || start" is a bit tricky
>>> (similarly for the check "sibling != event || start").
>>>
>>> If a session sets the frequency (with option -F in perf tool), the
>>> following flow is triggered:
>>>
>>>   perf_adjust_freq_unthr_events()
>>>     `> perf_event_unthrottle_group(event, false);
>>>
>>> The argument "start" is false, so all sibling events will be enabled,
>>> but the event pointed to by the "event" argument remains disabled.
>>
>> Right. Because the following code will adjust the period of the event
>> and start it.
>> The PMU is disabled at the moment. There is no difference in starting
>> the leader first or the member first.
> 
> Thanks for the explanation. In the case above, as you said, all events will
> be enabled either in perf_event_unthrottle_group() or in
> perf_adjust_freq_unthr_events() with a recalculated period.
> 
> Just a minor suggestion. It seems to me the parameter "start" actually
> means "only_enable_sibling". To make it more readable, the function
> could be refined as:
> 
> static void perf_event_unthrottle_group(struct perf_event *event,
>                                         bool only_enable_sibling)
> {
> 	struct perf_event *sibling, *leader = event->group_leader;
> 
> 	perf_event_unthrottle(leader,
>                 only_enable_sibling ? leader != event : true);
>         ...
> }
> 

It should work for perf_adjust_freq_unthr_events(), which only starts
the leader. But it's possible that __perf_event_period() updates a
sibling, not the leader.

I think I can change the name to bool event_has_start.
Is the name OK?

diff --git a/kernel/events/core.c b/kernel/events/core.c
index a270fcda766d..b1cb07fa9c18 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2749,13 +2749,13 @@ static void perf_event_throttle(struct perf_event *event)
 	perf_log_throttle(event, 0);
 }
 
-static void perf_event_unthrottle_group(struct perf_event *event, bool start)
+static void perf_event_unthrottle_group(struct perf_event *event, bool event_has_start)
 {
 	struct perf_event *sibling, *leader = event->group_leader;
 
-	perf_event_unthrottle(leader, leader != event || start);
+	perf_event_unthrottle(leader, event_has_start ? leader != event : true);
 	for_each_sibling_event(sibling, leader)
-		perf_event_unthrottle(sibling, sibling != event || start);
+		perf_event_unthrottle(sibling, event_has_start ? sibling != event : true);
 }
 
 static void perf_event_throttle_group(struct perf_event *event)
@@ -4423,7 +4423,7 @@ static void perf_adjust_freq_unthr_events(struct list_head *event_list)
 
 		if (hwc->interrupts == MAX_INTERRUPTS) {
 			perf_event_unthrottle_group(event,
-				!event->attr.freq || !event->attr.sample_freq);
+				(event->attr.freq && event->attr.sample_freq));
 		}
 
 		if (!event->attr.freq || !event->attr.sample_freq)
@@ -6466,7 +6466,7 @@ static void __perf_event_period(struct perf_event *event,
 		 * while we already re-started the event/group.
 		 */
 		if (event->hw.interrupts == MAX_INTERRUPTS)
-			perf_event_unthrottle_group(event, false);
+			perf_event_unthrottle_group(event, true);
 		perf_pmu_enable(event->pmu);
 	}
 }

Thanks,
Kan


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH V2 01/15] perf: Fix the throttle logic for a group
  2025-05-16 13:28         ` Liang, Kan
@ 2025-05-16 14:17           ` Leo Yan
  2025-05-16 14:33             ` Liang, Kan
  0 siblings, 1 reply; 26+ messages in thread
From: Leo Yan @ 2025-05-16 14:17 UTC (permalink / raw)
  To: Liang, Kan
  Cc: peterz, mingo, namhyung, irogers, mark.rutland, linux-kernel,
	linux-perf-users, eranian, ctshao, tmricht

On Fri, May 16, 2025 at 09:28:07AM -0400, Liang, Kan wrote:

[...]

> > Just a minor suggestion. It seems to me the parameter "start" actually
> > means "only_enable_sibling". To make it more readable, the function
> > could be refined as:
> > 
> > static void perf_event_unthrottle_group(struct perf_event *event,
> >                                         bool only_enable_sibling)
> > {
> > 	struct perf_event *sibling, *leader = event->group_leader;
> > 
> > 	perf_event_unthrottle(leader,
> >                 only_enable_sibling ? leader != event : true);
> >         ...
> > }
> > 
> 
> It should work for perf_adjust_freq_unthr_events(), which only starts
> the leader.

> But it's possible that __perf_event_period() updates a
> sibling, not the leader.

Should not perf_event_unthrottle_group() always enable sibling events?

The only difference is how the leader event is enabled. It can be
enabled in perf_event_unthrottle_group() in period mode; in frequency
mode, since a new period value is generated, the leader event is
enabled in perf_adjust_freq_unthr_events() or in
__perf_event_period().

This is why I suggested renaming the flag to only_enable_sibling:

  true: only enable sibling events
  false: enable all events (leader event and sibling events)

Or, we can rename the flag to "skip_start_event", which means to skip
enabling the event specified in the argument.

> I think I can change the name to bool event_has_start.
> Is the name OK?

I am still confused by the naming "event_has_start" :)

What exactly does it mean?

> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index a270fcda766d..b1cb07fa9c18 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -2749,13 +2749,13 @@ static void perf_event_throttle(struct perf_event *event)
>  	perf_log_throttle(event, 0);
>  }
> 
> -static void perf_event_unthrottle_group(struct perf_event *event, bool start)
> +static void perf_event_unthrottle_group(struct perf_event *event, bool event_has_start)
>  {
>  	struct perf_event *sibling, *leader = event->group_leader;
> 
> -	perf_event_unthrottle(leader, leader != event || start);
> +	perf_event_unthrottle(leader, event_has_start ? leader != event : true);
>  	for_each_sibling_event(sibling, leader)
> -		perf_event_unthrottle(sibling, sibling != event || start);
> +		perf_event_unthrottle(sibling, event_has_start ? sibling != event : true);
>  }
> 
>  static void perf_event_throttle_group(struct perf_event *event)
> @@ -4423,7 +4423,7 @@ static void perf_adjust_freq_unthr_events(struct list_head *event_list)
> 
>  		if (hwc->interrupts == MAX_INTERRUPTS) {
>  			perf_event_unthrottle_group(event,
> -				!event->attr.freq || !event->attr.sample_freq);
> +				(event->attr.freq && event->attr.sample_freq));
>  		}
> 
>  		if (!event->attr.freq || !event->attr.sample_freq)
> @@ -6466,7 +6466,7 @@ static void __perf_event_period(struct perf_event *event,
>  		 * while we already re-started the event/group.
>  		 */
>  		if (event->hw.interrupts == MAX_INTERRUPTS)
> -			perf_event_unthrottle_group(event, false);
> +			perf_event_unthrottle_group(event, true);
>  		perf_pmu_enable(event->pmu);

The logic in the updated code looks correct to me.

Thanks,
Leo

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH V2 01/15] perf: Fix the throttle logic for a group
  2025-05-16 14:17           ` Leo Yan
@ 2025-05-16 14:33             ` Liang, Kan
  0 siblings, 0 replies; 26+ messages in thread
From: Liang, Kan @ 2025-05-16 14:33 UTC (permalink / raw)
  To: Leo Yan
  Cc: peterz, mingo, namhyung, irogers, mark.rutland, linux-kernel,
	linux-perf-users, eranian, ctshao, tmricht



On 2025-05-16 10:17 a.m., Leo Yan wrote:
> On Fri, May 16, 2025 at 09:28:07AM -0400, Liang, Kan wrote:
> 
> [...]
> 
>>> Just a minor suggestion. It seems to me the parameter "start" actually
>>> means "only_enable_sibling". To make it more readable, the function
>>> could be refined as:
>>>
>>> static void perf_event_unthrottle_group(struct perf_event *event,
>>>                                         bool only_enable_sibling)
>>> {
>>> 	struct perf_event *sibling, *leader = event->group_leader;
>>>
>>> 	perf_event_unthrottle(leader,
>>>                 only_enable_sibling ? leader != event : true);
>>>         ...
>>> }
>>>
>>
>> It should work for perf_adjust_freq_unthr_events(), which only starts
>> the leader.
> 
>> But it's possible that __perf_event_period() updates a
>> sibling, not the leader.
> 
> Should not perf_event_unthrottle_group() always enable sibling events?
>

No. __perf_event_period() can reset the period of a sibling event. I
know it sounds weird, but it's doable.


> The only difference is how the leader event is enabled. It can be
> enabled in perf_event_unthrottle_group() in period mode; in frequency
> mode, since a new period value is generated, the leader event is
> enabled in perf_adjust_freq_unthr_events() or in
> __perf_event_period().
> 
> This is why I suggested renaming the flag to only_enable_sibling:
> 
>   true: only enable sibling events
>   false: enable all events (leader event and sibling events)
> 
> Or, we can rename the flag to "skip_start_event", which means to skip
> enabling the event specified in the argument.

The name "skip_start_event" sounds good to me. I will use it in V3.

Thanks,
Kan

>
>> I think I can change the name to bool event_has_start.
>> Is the name OK?
> 
> I am still confused by the naming "event_has_start" :)
> 
> What exactly does it mean?
> 
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index a270fcda766d..b1cb07fa9c18 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -2749,13 +2749,13 @@ static void perf_event_throttle(struct perf_event *event)
>>  	perf_log_throttle(event, 0);
>>  }
>>
>> -static void perf_event_unthrottle_group(struct perf_event *event, bool start)
>> +static void perf_event_unthrottle_group(struct perf_event *event, bool event_has_start)
>>  {
>>  	struct perf_event *sibling, *leader = event->group_leader;
>>
>> -	perf_event_unthrottle(leader, leader != event || start);
>> +	perf_event_unthrottle(leader, event_has_start ? leader != event : true);
>>  	for_each_sibling_event(sibling, leader)
>> -		perf_event_unthrottle(sibling, sibling != event || start);
>> +		perf_event_unthrottle(sibling, event_has_start ? sibling != event : true);
>>  }
>>
>>  static void perf_event_throttle_group(struct perf_event *event)
>> @@ -4423,7 +4423,7 @@ static void perf_adjust_freq_unthr_events(struct list_head *event_list)
>>
>>  		if (hwc->interrupts == MAX_INTERRUPTS) {
>>  			perf_event_unthrottle_group(event,
>> -				!event->attr.freq || !event->attr.sample_freq);
>> +				(event->attr.freq && event->attr.sample_freq));
>>  		}
>>
>>  		if (!event->attr.freq || !event->attr.sample_freq)
>> @@ -6466,7 +6466,7 @@ static void __perf_event_period(struct perf_event *event,
>>  		 * while we already re-started the event/group.
>>  		 */
>>  		if (event->hw.interrupts == MAX_INTERRUPTS)
>> -			perf_event_unthrottle_group(event, false);
>> +			perf_event_unthrottle_group(event, true);
>>  		perf_pmu_enable(event->pmu);
> 
> The logic in the updated code looks correct to me.
> 
> Thanks,
> Leo
> 


^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2025-05-16 14:33 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-05-14 15:13 [PATCH V2 00/15] perf: Fix the throttle logic for group kan.liang
2025-05-14 15:13 ` [PATCH V2 01/15] perf: Fix the throttle logic for a group kan.liang
2025-05-15  9:43   ` Leo Yan
2025-05-15 12:55     ` Liang, Kan
2025-05-16 12:51       ` Leo Yan
2025-05-16 13:28         ` Liang, Kan
2025-05-16 14:17           ` Leo Yan
2025-05-16 14:33             ` Liang, Kan
2025-05-14 15:13 ` [PATCH V2 02/15] perf/x86/intel: Remove driver-specific throttle support kan.liang
2025-05-14 15:13 ` [PATCH V2 03/15] perf/x86/amd: " kan.liang
2025-05-14 15:13 ` [PATCH V2 04/15] perf/x86/zhaoxin: " kan.liang
2025-05-14 15:13 ` [PATCH V2 05/15] powerpc/perf: " kan.liang
2025-05-14 15:13 ` [PATCH V2 06/15] s390/perf: " kan.liang
2025-05-15 13:15   ` Thomas Richter
2025-05-15 13:56     ` Liang, Kan
2025-05-14 15:13 ` [PATCH V2 07/15] perf/arm: " kan.liang
2025-05-16 13:24   ` Leo Yan
2025-05-14 15:13 ` [PATCH V2 08/15] perf/apple_m1: " kan.liang
2025-05-14 15:13 ` [PATCH V2 09/15] alpha/perf: " kan.liang
2025-05-14 15:13 ` [PATCH V2 10/15] arc/perf: " kan.liang
2025-05-14 15:13 ` [PATCH V2 11/15] csky/perf: " kan.liang
2025-05-15  6:34   ` Guo Ren
2025-05-14 15:13 ` [PATCH V2 12/15] loongarch/perf: " kan.liang
2025-05-14 15:13 ` [PATCH V2 13/15] sparc/perf: " kan.liang
2025-05-14 15:14 ` [PATCH V2 14/15] xtensa/perf: " kan.liang
2025-05-14 15:14 ` [PATCH V2 15/15] mips/perf: " kan.liang

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).