* [PATCH 0/2] Two semi-related perf throttling fixes
@ 2026-03-31 15:25 Calvin Owens
2026-03-31 15:25 ` [PATCH 1/2] perf/x86: Avoid double accounting of PMU NMI latencies Calvin Owens
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: Calvin Owens @ 2026-03-31 15:25 UTC (permalink / raw)
To: linux-kernel
Cc: linux-perf-users, x86, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Thomas Gleixner, Borislav Petkov, Dave Hansen,
H. Peter Anvin
Hi all,
In the course of investigating [1], I set out to understand why this
sequence of messages is printed every boot, even when nobody is using
perf at all:
perf: interrupt took too long (2516 > 2500), lowering kernel.perf_event_max_sample_rate to 79000
perf: interrupt took too long (3156 > 3145), lowering kernel.perf_event_max_sample_rate to 63000
perf: interrupt took too long (4014 > 3945), lowering kernel.perf_event_max_sample_rate to 49000
perf: interrupt took too long (5035 > 5017), lowering kernel.perf_event_max_sample_rate to 39000
perf: interrupt took too long (6302 > 6293), lowering kernel.perf_event_max_sample_rate to 31000
perf: interrupt took too long (7879 > 7877), lowering kernel.perf_event_max_sample_rate to 25000
perf: interrupt took too long (9852 > 9848), lowering kernel.perf_event_max_sample_rate to 20000
It turns out this happens because of how the dynamic sample rate
throttling interacts with the perf hardware watchdog. Patch [2/2] is my
attempt to prevent the dynamic throttling logic from acting solely based
on the latency of the watchdog NMI.
Intel CPUs were happy with that. But AMD CPUs still printed the messages!
That happens because AMD CPUs have a second PMU facility with its own
NMI handler, and both NMI handlers average in their latency, even when
they don't actually handle the NMI.
Patch [1/2] fixes that, which is a correctness issue entirely
independent of patch [2/2]. But it also happens to be required for patch
[2/2] to achieve its goal on AMD CPUs, so I sent them together.
Thanks,
Calvin
[1] https://lore.kernel.org/all/acMe-QZUel-bBYUh@mozart.vkv.me/
Calvin Owens (2):
perf/x86: Avoid double accounting of PMU NMI latencies
perf: Don't throttle based on NMI watchdog events
arch/x86/events/amd/ibs.c | 6 +++---
arch/x86/events/core.c | 3 ++-
kernel/events/core.c | 14 ++++++++++++++
3 files changed, 19 insertions(+), 4 deletions(-)
--
2.47.3
^ permalink raw reply [flat|nested] 8+ messages in thread
* [PATCH 1/2] perf/x86: Avoid double accounting of PMU NMI latencies
2026-03-31 15:25 [PATCH 0/2] Two semi-related perf throttling fixes Calvin Owens
@ 2026-03-31 15:25 ` Calvin Owens
2026-03-31 15:25 ` [PATCH 2/2] perf: Don't throttle based on NMI watchdog events Calvin Owens
2026-04-01 8:01 ` [PATCH 0/2] Two semi-related perf throttling fixes Andi Kleen
2 siblings, 0 replies; 8+ messages in thread
From: Calvin Owens @ 2026-03-31 15:25 UTC (permalink / raw)
To: linux-kernel
Cc: linux-perf-users, x86, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Thomas Gleixner, Borislav Petkov, Dave Hansen,
H. Peter Anvin
Because NMIs always poll all handlers, calling perf_sample_event_took()
unconditionally in perf_ibs_nmi_handler() and perf_event_nmi_handler()
causes two latency numbers to be fed into the exponentially weighted
moving average for each NMI on AMD machines, one of which is much
smaller than the other:
<...>-70985 [029] d.Z1. 13311.704313: nmi_handler: perf_event_nmi_handler() delta_ns: 6732 handled: 1
<...>-70985 [029] d.Z1. 13311.704317: nmi_handler: nmi_cpu_backtrace_handler() delta_ns: 1673 handled: 0
<...>-70985 [029] d.Z1. 13311.704319: nmi_handler: perf_ibs_nmi_handler() delta_ns: 2064 handled: 0
This can bias the average unrealistically low, in this case because the
latency of perf_ibs_handle_irq() doing nothing is averaged with the
latency of amd_pmu_v2_handle_irq() doing real work:
# bpftrace -e 'kprobe:perf_sample_event_took {\
printf("%s: cpu=%02d sample_len_ns=%d\n", strftime("%S.%f", nsecs), cpu(), arg0); }'
Attached 1 probe
02.836860: cpu=17 sample_len_ns=7775
02.836871: cpu=17 sample_len_ns=1492 // avg=4634
03.042803: cpu=20 sample_len_ns=4298
03.042810: cpu=20 sample_len_ns=1152 // avg=2725
03.204410: cpu=27 sample_len_ns=6973
03.204420: cpu=27 sample_len_ns=1302 // avg=4137
03.622364: cpu=00 sample_len_ns=5270
03.622371: cpu=00 sample_len_ns=992 // avg=3131
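The bias is easy to reproduce with a toy model of the decaying average (a
Python sketch; the 1/128 decay and the helper names are my reading of
perf_sample_event_took() in kernel/events/core.c, not part of this patch):

```python
# Rough model of the decaying average fed by perf_sample_event_took().
# The 128-sample decay below is assumed from kernel/events/core.c.
NR_ACCUMULATED_SAMPLES = 128

def ewma_update(running_len, sample_len_ns):
    # Each reported sample decays the accumulator by 1/128, then adds itself.
    running_len -= running_len // NR_ACCUMULATED_SAMPLES
    return running_len + sample_len_ns

def steady_state_avg(samples_per_nmi):
    # Feed the same per-NMI samples forever and report the settled average.
    running = 0
    for _ in range(10_000):
        for s in samples_per_nmi:
            running = ewma_update(running, s)
    return running // NR_ACCUMULATED_SAMPLES

# Real handler alone: the average settles at the real latency.
print(steady_state_avg([7000]))        # 7000
# Real handler plus a no-op handler that also reports: biased low,
# roughly (7000 + 1500) / 2.
print(steady_state_avg([7000, 1500]))
```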
Avoid the problem by only accounting the latency of the handler which
actually handled the NMI.
Fixes: c2872d381f1a ("perf/x86/ibs: Add IBS interrupt to the dynamic throttle")
Signed-off-by: Calvin Owens <calvin@wbinvd.org>
---
arch/x86/events/amd/ibs.c | 6 +++---
arch/x86/events/core.c | 3 ++-
2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
index aca89f23d2e0..036385de2123 100644
--- a/arch/x86/events/amd/ibs.c
+++ b/arch/x86/events/amd/ibs.c
@@ -1402,10 +1402,10 @@ perf_ibs_nmi_handler(unsigned int cmd, struct pt_regs *regs)
handled += perf_ibs_handle_irq(&perf_ibs_fetch, regs);
handled += perf_ibs_handle_irq(&perf_ibs_op, regs);
- if (handled)
+ if (handled) {
inc_irq_stat(apic_perf_irqs);
-
- perf_sample_event_took(sched_clock() - stamp);
+ perf_sample_event_took(sched_clock() - stamp);
+ }
return handled;
}
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 810ab21ffd99..d1c7612e2e5b 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1814,7 +1814,8 @@ perf_event_nmi_handler(unsigned int cmd, struct pt_regs *regs)
ret = static_call(x86_pmu_handle_irq)(regs);
finish_clock = sched_clock();
- perf_sample_event_took(finish_clock - start_clock);
+ if (ret)
+ perf_sample_event_took(finish_clock - start_clock);
return ret;
}
--
2.47.3
* [PATCH 2/2] perf: Don't throttle based on NMI watchdog events
2026-03-31 15:25 [PATCH 0/2] Two semi-related perf throttling fixes Calvin Owens
2026-03-31 15:25 ` [PATCH 1/2] perf/x86: Avoid double accounting of PMU NMI latencies Calvin Owens
@ 2026-03-31 15:25 ` Calvin Owens
2026-03-31 17:22 ` Calvin Owens
2026-04-01 8:01 ` [PATCH 0/2] Two semi-related perf throttling fixes Andi Kleen
2 siblings, 1 reply; 8+ messages in thread
From: Calvin Owens @ 2026-03-31 15:25 UTC (permalink / raw)
To: linux-kernel
Cc: linux-perf-users, x86, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Thomas Gleixner, Borislav Petkov, Dave Hansen,
H. Peter Anvin
The throttling logic in perf_sample_event_took() assumes the NMI is
running at the maximum allowed sample rate. While this makes sense most
of the time, it wildly overestimates the runtime of the NMI for the perf
hardware watchdog:
# bpftrace -e 'kprobe:perf_sample_event_took { \
printf("%s: cpu=%02d time_taken=%dns\n", \
strftime("%H:%M:%S.%f", nsecs), cpu(), arg0); }'
03:12:13.087003: cpu=00 time_taken=3190ns
03:12:13.486789: cpu=01 time_taken=2918ns
03:12:18.075288: cpu=03 time_taken=3308ns
03:12:19.797207: cpu=02 time_taken=2581ns
03:12:23.110317: cpu=00 time_taken=2823ns
03:12:23.510308: cpu=01 time_taken=2943ns
03:12:29.229348: cpu=03 time_taken=3669ns
03:12:31.656306: cpu=02 time_taken=3262ns
The NMI for the watchdog runs for 2-4us every ten seconds, but the
math done in perf_sample_event_took() concludes it is running for
200-400ms every second!
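For concreteness, the arithmetic works out as follows, assuming the default
sysctl values (kernel.perf_event_max_sample_rate = 100000 and
kernel.perf_cpu_time_max_percent = 25 are assumptions here, as is the budget
formula):

```python
NSEC_PER_SEC = 1_000_000_000
max_sample_rate = 100_000   # samples/sec the throttle math assumes
cpu_max_percent = 25        # max NMI time as a percentage of one CPU

# Allowed time per sample: 25% of one CPU split across 100000 samples/sec.
allowed_ns = NSEC_PER_SEC // max_sample_rate * cpu_max_percent // 100
print(allowed_ns)  # 2500, matching the "(2526 > 2500)" threshold below

# A 3000ns watchdog NMI is charged as if it fired at the full sample rate:
assumed_ns_per_sec = 3000 * max_sample_rate   # 300ms of NMI time per second
actual_ns_per_sec = 3000 // 10                # 300ns/sec: fires every 10s
print(assumed_ns_per_sec // actual_ns_per_sec)  # overestimate: 10^6
```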
When it is the only PMU event running, it can take minutes to hours of
watchdog samples for the moving average to climb to something near the
real mean, which causes the same little "litany" of sample rate
throttles to appear every time Linux boots with the perf hardware
watchdog enabled:
perf: interrupt took too long (2526 > 2500), lowering kernel.perf_event_max_sample_rate to 79000
perf: interrupt took too long (3177 > 3157), lowering kernel.perf_event_max_sample_rate to 62000
perf: interrupt took too long (3979 > 3971), lowering kernel.perf_event_max_sample_rate to 50000
perf: interrupt took too long (4983 > 4973), lowering kernel.perf_event_max_sample_rate to 40000
This serves no purpose: it doesn't actually affect the runtime of the
watchdog NMI at all. It confuses users, because it suggests their
machine is spinning its wheels in interrupts when it isn't.
Because the watchdog NMI is so infrequent, we can avoid throttling it by
making the throttling a two-step process: load and update a timestamp
whenever we think we need to throttle, and only actually proceed to
throttle if the last time that happened was less than one second ago.
This is inelegant, but it avoids touching the hot path and preserves
current throttling behavior for real PMU use, at the cost of delaying
the throttling by a single NMI.
Signed-off-by: Calvin Owens <calvin@wbinvd.org>
---
kernel/events/core.c | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 89b40e439717..0f7a7e912f55 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -623,6 +623,7 @@ core_initcall(init_events_core_sysctls);
*/
#define NR_ACCUMULATED_SAMPLES 128
static DEFINE_PER_CPU(u64, running_sample_length);
+static DEFINE_PER_CPU(u64, last_throttle_clock);
static u64 __report_avg;
static u64 __report_allowed;
@@ -643,6 +644,8 @@ void perf_sample_event_took(u64 sample_len_ns)
u64 max_len = READ_ONCE(perf_sample_allowed_ns);
u64 running_len;
u64 avg_len;
+ u64 delta;
+ u64 now;
u32 max;
if (max_len == 0)
@@ -663,6 +666,17 @@ void perf_sample_event_took(u64 sample_len_ns)
if (avg_len <= max_len)
return;
+ /*
+ * Very infrequent events like the perf counter hard watchdog
+ * can trigger spurious throttling: skip throttling if the prior
+ * NMI got here more than one second before this NMI began.
+ */
+ now = local_clock();
+ delta = now - __this_cpu_read(last_throttle_clock);
+ __this_cpu_write(last_throttle_clock, now);
+ if (delta - sample_len_ns > NSEC_PER_SEC)
+ return;
+
__report_avg = avg_len;
__report_allowed = max_len;
--
2.47.3
* Re: [PATCH 2/2] perf: Don't throttle based on NMI watchdog events
2026-03-31 15:25 ` [PATCH 2/2] perf: Don't throttle based on NMI watchdog events Calvin Owens
@ 2026-03-31 17:22 ` Calvin Owens
2026-03-31 17:43 ` Calvin Owens
2026-03-31 18:10 ` Calvin Owens
0 siblings, 2 replies; 8+ messages in thread
From: Calvin Owens @ 2026-03-31 17:22 UTC (permalink / raw)
To: linux-kernel
Cc: linux-perf-users, x86, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Thomas Gleixner, Borislav Petkov, Dave Hansen,
H. Peter Anvin
On Tuesday 03/31 at 08:25 -0700, Calvin Owens wrote:
> @@ -663,6 +666,17 @@ void perf_sample_event_took(u64 sample_len_ns)
> if (avg_len <= max_len)
> return;
>
> + /*
> + * Very infrequent events like the perf counter hard watchdog
> + * can trigger spurious throttling: skip throttling if the prior
> + * NMI got here more than one second before this NMI began.
> + */
> + now = local_clock();
> + delta = now - __this_cpu_read(last_throttle_clock);
> + __this_cpu_write(last_throttle_clock, now);
> + if (delta - sample_len_ns > NSEC_PER_SEC)
> + return;
Bah, Sashiko caught something obvious I missed:
https://sashiko.dev/#/patchset/cover.1774969692.git.calvin%40wbinvd.org
>> When the outer handler completes, its sample_len_ns (total execution
>> time) will be strictly greater than delta (time since the inner
>> handler finished). This guarantees delta < sample_len_ns, causing the
>> subtraction to underflow to a massive positive value.
>>
>> The condition > NSEC_PER_SEC will then evaluate to true, and the outer
>> handler will erroneously skip the perf throttling logic. Should this
>> check be rewritten to avoid subtraction, perhaps by using if (delta >
>> sample_len_ns + NSEC_PER_SEC)?
The solution it proposed makes sense to me.
> __report_avg = avg_len;
> __report_allowed = max_len;
>
> --
> 2.47.3
>
* Re: [PATCH 2/2] perf: Don't throttle based on NMI watchdog events
2026-03-31 17:22 ` Calvin Owens
@ 2026-03-31 17:43 ` Calvin Owens
2026-03-31 18:10 ` Calvin Owens
1 sibling, 0 replies; 8+ messages in thread
From: Calvin Owens @ 2026-03-31 17:43 UTC (permalink / raw)
To: linux-kernel
Cc: linux-perf-users, x86, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Thomas Gleixner, Borislav Petkov, Dave Hansen,
H. Peter Anvin
On Tuesday 03/31 at 10:22 -0700, Calvin Owens wrote:
> On Tuesday 03/31 at 08:25 -0700, Calvin Owens wrote:
> > @@ -663,6 +666,17 @@ void perf_sample_event_took(u64 sample_len_ns)
> > if (avg_len <= max_len)
> > return;
> >
> > + /*
> > + * Very infrequent events like the perf counter hard watchdog
> > + * can trigger spurious throttling: skip throttling if the prior
> > + * NMI got here more than one second before this NMI began.
> > + */
> > + now = local_clock();
> > + delta = now - __this_cpu_read(last_throttle_clock);
> > + __this_cpu_write(last_throttle_clock, now);
> > + if (delta - sample_len_ns > NSEC_PER_SEC)
> > + return;
Apologies for replying twice in a row...
Sashiko made a second useful observation:
>> There appears to be no upper bound on sample_len_ns itself. If an
>> event takes 5 seconds to run but is configured to fire only once every
>> 7 seconds, the idle time will be 2 seconds.
>>
>> Because 2 seconds is > NSEC_PER_SEC, the throttling logic is skipped
>> entirely. This defeats the sysctl_perf_cpu_time_max_percent safeguard
>> and allows an event to monopolize the CPU in NMI/IRQ context for
>> seconds at a time without ever being throttled.
I'm skeptical that would ever actually happen, but I think I can address
that by adding:
&& sample_len_ns < NSEC_PER_SEC
...to the skip throttle condition?
In fairness to the LLM skeptics, the feedback Sashiko gave on patch 1/2
is absolute nonsense.
Thanks,
Calvin
* Re: [PATCH 2/2] perf: Don't throttle based on NMI watchdog events
2026-03-31 17:22 ` Calvin Owens
2026-03-31 17:43 ` Calvin Owens
@ 2026-03-31 18:10 ` Calvin Owens
2026-03-31 21:07 ` Calvin Owens
1 sibling, 1 reply; 8+ messages in thread
From: Calvin Owens @ 2026-03-31 18:10 UTC (permalink / raw)
To: linux-kernel
Cc: linux-perf-users, x86, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Thomas Gleixner, Borislav Petkov, Dave Hansen,
H. Peter Anvin
On Tuesday 03/31 at 10:22 -0700, Calvin Owens wrote:
> On Tuesday 03/31 at 08:25 -0700, Calvin Owens wrote:
> > @@ -663,6 +666,17 @@ void perf_sample_event_took(u64 sample_len_ns)
> > if (avg_len <= max_len)
> > return;
> >
> > + /*
> > + * Very infrequent events like the perf counter hard watchdog
> > + * can trigger spurious throttling: skip throttling if the prior
> > + * NMI got here more than one second before this NMI began.
> > + */
> > + now = local_clock();
> > + delta = now - __this_cpu_read(last_throttle_clock);
> > + __this_cpu_write(last_throttle_clock, now);
> > + if (delta - sample_len_ns > NSEC_PER_SEC)
> > + return;
>
> Bah, Sashiko caught something obvious I missed:
>
> https://sashiko.dev/#/patchset/cover.1774969692.git.calvin%40wbinvd.org
>
> >> When the outer handler completes, its sample_len_ns (total execution
> >> time) will be strictly greater than delta (time since the inner
> >> handler finished). This guarantees delta < sample_len_ns, causing the
> >> subtraction to underflow to a massive positive value.
> >>
> >> The condition > NSEC_PER_SEC will then evaluate to true, and the outer
> >> handler will erroneously skip the perf throttling logic. Should this
> >> check be rewritten to avoid subtraction, perhaps by using if (delta >
> >> sample_len_ns + NSEC_PER_SEC)?
>
> The solution it proposed makes sense to me.
I replied too quickly: I think Sashiko is actually wrong.
It is assuming that sample_len_ns includes the latency of
perf_sample_event_took(), but it does not.
Nesting in the middle of the RMW of the percpu value strictly makes
last_throttle_clock appear to have happened *sooner* to the outer NMI,
so I think that case works.
Thanks, apologies again for all the noise here,
Calvin
> > __report_avg = avg_len;
> > __report_allowed = max_len;
> >
> > --
> > 2.47.3
> >
* Re: [PATCH 2/2] perf: Don't throttle based on NMI watchdog events
2026-03-31 18:10 ` Calvin Owens
@ 2026-03-31 21:07 ` Calvin Owens
0 siblings, 0 replies; 8+ messages in thread
From: Calvin Owens @ 2026-03-31 21:07 UTC (permalink / raw)
To: linux-kernel
Cc: linux-perf-users, x86, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Thomas Gleixner, Borislav Petkov, Dave Hansen,
H. Peter Anvin
On Tuesday 03/31 at 11:10 -0700, Calvin Owens wrote:
> On Tuesday 03/31 at 10:22 -0700, Calvin Owens wrote:
> > On Tuesday 03/31 at 08:25 -0700, Calvin Owens wrote:
> > > @@ -663,6 +666,17 @@ void perf_sample_event_took(u64 sample_len_ns)
> > > if (avg_len <= max_len)
> > > return;
> > >
> > > + /*
> > > + * Very infrequent events like the perf counter hard watchdog
> > > + * can trigger spurious throttling: skip throttling if the prior
> > > + * NMI got here more than one second before this NMI began.
> > > + */
> > > + now = local_clock();
> > > + delta = now - __this_cpu_read(last_throttle_clock);
> > > + __this_cpu_write(last_throttle_clock, now);
> > > + if (delta - sample_len_ns > NSEC_PER_SEC)
> > > + return;
> >
> > Bah, Sashiko caught something obvious I missed:
> >
> > https://sashiko.dev/#/patchset/cover.1774969692.git.calvin%40wbinvd.org
> >
> > >> When the outer handler completes, its sample_len_ns (total execution
> > >> time) will be strictly greater than delta (time since the inner
> > >> handler finished). This guarantees delta < sample_len_ns, causing the
> > >> subtraction to underflow to a massive positive value.
> > >>
> > >> The condition > NSEC_PER_SEC will then evaluate to true, and the outer
> > >> handler will erroneously skip the perf throttling logic. Should this
> > >> check be rewritten to avoid subtraction, perhaps by using if (delta >
> > >> sample_len_ns + NSEC_PER_SEC)?
> >
> > The solution it proposed makes sense to me.
>
> I replied too quickly: I think Sashiko is actually wrong.
Last time, I swear to god. I worked this out: Sashiko's nesting concern
is indeed correct.
The relevant RMW is:
now = local_clock()
delta = now - last_throttle_clock;
last_throttle_clock = now
Assume last_throttle_clock starts at zero.
Normal case:
NMI >>> sample_len_ns=1000ns
now = 1010
delta = 1010
last_throttle_clock = 1010
(1010 - 0 > NSEC_PER_SEC) == false
Nesting case 1:
NMI >>> sample_len_ns=1000ns
now = 1010
NMI >>> sample_len_ns=1000ns
now = 2020
delta = 2020;
last_throttle_clock = 2020
(2020 - 0 > NSEC_PER_SEC) == false
// does not skip throttle
delta = *underflow*
last_throttle_clock = 1010
(*underflow* - 1000 > NSEC_PER_SEC) == true
// skips throttle
Nesting case 2:
NMI >>> sample_len_ns=1000ns
now = 1010
delta = 1010
NMI >>> sample_len_ns=1000ns
now = 2020
delta = 2020
last_throttle_clock = 2020
(2020 - 0 > NSEC_PER_SEC) == false
// does not skip throttle
last_throttle_clock = 1010
(1010 - 1000 > NSEC_PER_SEC) == true
// skips throttle
I think the below deals with it. But I will wait to hear back before
sending a V2.
Thanks,
Calvin
---
kernel/events/core.c | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 89b40e439717..c51d61fbb03b 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -623,6 +623,7 @@ core_initcall(init_events_core_sysctls);
*/
#define NR_ACCUMULATED_SAMPLES 128
static DEFINE_PER_CPU(u64, running_sample_length);
+static DEFINE_PER_CPU(u64, last_throttle_clock);
static u64 __report_avg;
static u64 __report_allowed;
@@ -643,6 +644,8 @@ void perf_sample_event_took(u64 sample_len_ns)
u64 max_len = READ_ONCE(perf_sample_allowed_ns);
u64 running_len;
u64 avg_len;
+ u64 last;
+ u64 now;
u32 max;
if (max_len == 0)
@@ -663,6 +666,18 @@ void perf_sample_event_took(u64 sample_len_ns)
if (avg_len <= max_len)
return;
+ /*
+ * Very infrequent events like the perf counter hard watchdog
+ * can trigger spurious throttling: skip throttling if the prior
+ * NMI got here more than one second before this NMI began. But
+ * if NMIs are nesting, never skip throttling.
+ */
+ now = local_clock();
+ last = __this_cpu_read(last_throttle_clock);
+ if (this_cpu_try_cmpxchg(last_throttle_clock, last, now) &&
+ now - last > NSEC_PER_SEC)
+ return;
+
__report_avg = avg_len;
__report_allowed = max_len;
--
2.47.3
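As a sanity check on the nesting behavior, the try_cmpxchg gate above can be
modeled single-threaded, with the nested NMI replayed by hand (hypothetical
Python model; the class and helper names are illustrative, not kernel API):

```python
NSEC_PER_SEC = 1_000_000_000

class Cpu:
    """Models the per-cpu last_throttle_clock and this_cpu_try_cmpxchg()."""
    def __init__(self):
        self.last_throttle_clock = 0

    def try_cmpxchg(self, old, new):
        # Succeeds only if nothing (e.g. a nested NMI) changed the value
        # since the caller read it.
        if self.last_throttle_clock != old:
            return False
        self.last_throttle_clock = new
        return True

def skip_throttle(cpu, now):
    # Mirrors the gate in the patch: skip throttling only when the
    # cmpxchg wins and the prior throttle attempt was over a second ago.
    last = cpu.last_throttle_clock
    return cpu.try_cmpxchg(last, now) and now - last > NSEC_PER_SEC

cpu = Cpu()
# Infrequent watchdog-style NMI: throttling is skipped.
print(skip_throttle(cpu, 10 * NSEC_PER_SEC))   # True

# Nesting: the outer NMI reads the clock, then an inner NMI runs fully.
outer_last = cpu.last_throttle_clock
print(skip_throttle(cpu, 26 * NSEC_PER_SEC))   # True (the inner NMI)
# The outer NMI's cmpxchg now fails, so it never skips throttling.
print(cpu.try_cmpxchg(outer_last, 25 * NSEC_PER_SEC))  # False
```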
* Re: [PATCH 0/2] Two semi-related perf throttling fixes
2026-03-31 15:25 [PATCH 0/2] Two semi-related perf throttling fixes Calvin Owens
2026-03-31 15:25 ` [PATCH 1/2] perf/x86: Avoid double accounting of PMU NMI latencies Calvin Owens
2026-03-31 15:25 ` [PATCH 2/2] perf: Don't throttle based on NMI watchdog events Calvin Owens
@ 2026-04-01 8:01 ` Andi Kleen
2 siblings, 0 replies; 8+ messages in thread
From: Andi Kleen @ 2026-04-01 8:01 UTC (permalink / raw)
To: Calvin Owens
Cc: linux-kernel, linux-perf-users, x86, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Thomas Gleixner, Borislav Petkov, Dave Hansen,
H. Peter Anvin
Calvin Owens <calvin@wbinvd.org> writes:
> Hi all,
>
> In the course of investigating [1], I set out to understand why this
> sequence of messages is printed every boot, even when nobody is using
> perf at all:
I don't think I've ever seen that just from the NMI watchdog. I wonder
what is different on your machines. And of course the PMU based NMI
watchdog is on the way out.
But the fixes make sense to me.
Reviewed-by: Andi Kleen <ak@kernel.org>