* [PATCH V2] perf/x86: Consider pinned events for group validation
@ 2019-08-22 15:47 kan.liang
2019-08-22 18:22 ` Andi Kleen
0 siblings, 1 reply; 5+ messages in thread
From: kan.liang @ 2019-08-22 15:47 UTC (permalink / raw)
To: peterz, mingo, acme, linux-kernel; +Cc: eranian, ak, Kan Liang
From: Kan Liang <kan.liang@linux.intel.com>
perf stat -M metrics relies on weak groups to reject unschedulable
groups and run them as non-groups.
This uses the group validation code in the kernel. Unfortunately
that code doesn't take pinned events, such as the NMI watchdog, into
account. So some groups can pass validation, but then later still
never schedule.
For example,

  $ echo 1 > /proc/sys/kernel/nmi_watchdog
  $ perf stat -M Page_Walks_Utilization

  Performance counter stats for 'system wide':

     <not counted>      itlb_misses.walk_pending                         (0.00%)
     <not counted>      dtlb_load_misses.walk_pending                    (0.00%)
     <not counted>      dtlb_store_misses.walk_pending                   (0.00%)
     <not counted>      ept.walk_pending                                 (0.00%)
     <not counted>      cycles                                           (0.00%)

  1.176613558 seconds time elapsed
Currently pinned events are always scheduled first, so a new group must
be schedulable together with the existing pinned events. Otherwise, it
will never get a chance to be scheduled later.
The trick is to treat the currently pinned events as part of the new
group and insert them into the fake_cpuc. The scheduling simulation then
tells whether the combination can be scheduled successfully.
The fake_cpuc never touches event state, so the current pinned events
are not affected.
Interrupts are disabled to prevent the events in the current CPU's cpuc
from going away and being freed.
This won't catch every unschedulable case, such as events pinned
differently on different CPUs, or complicated constraints. The
validation is based on the current environment, so it also doesn't help
when a group is created first and a pinned event afterwards.
But for the most common case, the NMI watchdog interacting with the
current perf metrics, it is good enough.
After applying the patch,

  $ echo 1 > /proc/sys/kernel/nmi_watchdog
  $ perf stat -M Page_Walks_Utilization

  Performance counter stats for 'system wide':

         2,491,910      itlb_misses.walk_pending       # 0.0 Page_Walks_Utilization  (79.94%)
        13,630,942      dtlb_load_misses.walk_pending                                (80.02%)
           207,255      dtlb_store_misses.walk_pending                               (80.04%)
                 0      ept.walk_pending                                             (80.04%)
       236,204,924      cycles                                                       (79.97%)

  0.901785713 seconds time elapsed
Reported-by: Stephane Eranian <eranian@google.com>
Suggested-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
V2 still only checks the current CPU's cpuc, because I don't think we
can protect another CPU's cpuc without a lock, and adding a lock would
introduce extra overhead in some critical paths, e.g. context switch.
The patch is good enough for the common case; the other, more
complicated cases can be left as they are.
Changes since V1:
- Disable interrupts to prevent the events in the current CPU's cpuc
from going away and being freed.
- Update comments and description
arch/x86/events/core.c | 34 +++++++++++++++++++++++++++++++++-
1 file changed, 33 insertions(+), 1 deletion(-)
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 81b005e..b59154e 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2011,9 +2011,12 @@ static int validate_event(struct perf_event *event)
*/
static int validate_group(struct perf_event *event)
{
+ struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
struct perf_event *leader = event->group_leader;
struct cpu_hw_events *fake_cpuc;
- int ret = -EINVAL, n;
+ struct perf_event *pinned_event;
+ int ret = -EINVAL, n, i;
+ unsigned long flags;
fake_cpuc = allocate_fake_cpuc();
if (IS_ERR(fake_cpuc))
@@ -2033,9 +2036,38 @@ static int validate_group(struct perf_event *event)
if (n < 0)
goto out;
+ /*
+ * Disable interrupts to prevent the events in this CPU's cpuc
+ * going away and getting freed.
+ */
+ local_irq_save(flags);
+
+ /*
+ * The new group must be schedulable together with the currently
+ * pinned events; otherwise it will never get a chance to be
+ * scheduled later.
+ *
+ * This won't catch every unschedulable case, e.g. an event pinned
+ * on CPU1 while a new CPU1 event is validated on another CPU, but
+ * it's good enough to handle common cases like the global NMI
+ * watchdog.
+ */
+ for (i = 0; i < cpuc->n_events; i++) {
+ pinned_event = cpuc->event_list[i];
+ if (WARN_ON_ONCE(!pinned_event))
+ continue;
+ if (!pinned_event->attr.pinned)
+ continue;
+ fake_cpuc->n_events = n;
+ n = collect_events(fake_cpuc, pinned_event, false);
+ if (n < 0)
+ goto irq;
+ }
+
fake_cpuc->n_events = 0;
ret = x86_pmu.schedule_events(fake_cpuc, n, NULL);
+irq:
+ local_irq_restore(flags);
out:
free_fake_cpuc(fake_cpuc);
return ret;
--
2.7.4
* Re: [PATCH V2] perf/x86: Consider pinned events for group validation
2019-08-22 15:47 [PATCH V2] perf/x86: Consider pinned events for group validation kan.liang
@ 2019-08-22 18:22 ` Andi Kleen
2019-08-22 18:29 ` Thomas Gleixner
0 siblings, 1 reply; 5+ messages in thread
From: Andi Kleen @ 2019-08-22 18:22 UTC (permalink / raw)
To: kan.liang; +Cc: peterz, mingo, acme, linux-kernel, eranian
> + /*
> + * Disable interrupts to prevent the events in this CPU's cpuc
> + * going away and getting freed.
> + */
> + local_irq_save(flags);
I believe it's also needed to disable preemption. Probably should
add a comment, or better an explicit preempt_disable() too.
-Andi
* Re: [PATCH V2] perf/x86: Consider pinned events for group validation
2019-08-22 18:22 ` Andi Kleen
@ 2019-08-22 18:29 ` Thomas Gleixner
2019-08-22 18:55 ` Andi Kleen
0 siblings, 1 reply; 5+ messages in thread
From: Thomas Gleixner @ 2019-08-22 18:29 UTC (permalink / raw)
To: Andi Kleen; +Cc: kan.liang, peterz, mingo, acme, linux-kernel, eranian
On Thu, 22 Aug 2019, Andi Kleen wrote:
> > + /*
> > + * Disable interrupts to prevent the events in this CPU's cpuc
> > + * going away and getting freed.
> > + */
> > + local_irq_save(flags);
>
> I believe it's also needed to disable preemption. Probably should
> add a comment, or better an explicit preempt_disable() too.
Preemption is implicitly disabled by disabling interrupts.
Thanks,
tglx
* Re: [PATCH V2] perf/x86: Consider pinned events for group validation
2019-08-22 18:29 ` Thomas Gleixner
@ 2019-08-22 18:55 ` Andi Kleen
2019-08-23 2:05 ` Thomas Gleixner
0 siblings, 1 reply; 5+ messages in thread
From: Andi Kleen @ 2019-08-22 18:55 UTC (permalink / raw)
To: Thomas Gleixner; +Cc: kan.liang, peterz, mingo, acme, linux-kernel, eranian
On Thu, Aug 22, 2019 at 08:29:46PM +0200, Thomas Gleixner wrote:
> On Thu, 22 Aug 2019, Andi Kleen wrote:
>
> > > + /*
> > > + * Disable interrupts to prevent the events in this CPU's cpuc
> > > + * going away and getting freed.
> > > + */
> > > + local_irq_save(flags);
> >
> > I believe it's also needed to disable preemption. Probably should
> > add a comment, or better an explicit preempt_disable() too.
>
> Preemption is implicitly disabled by disabling interrupts.
Yes it is, this is more for documentation.
-Andi
* Re: [PATCH V2] perf/x86: Consider pinned events for group validation
2019-08-22 18:55 ` Andi Kleen
@ 2019-08-23 2:05 ` Thomas Gleixner
0 siblings, 0 replies; 5+ messages in thread
From: Thomas Gleixner @ 2019-08-23 2:05 UTC (permalink / raw)
To: Andi Kleen; +Cc: kan.liang, peterz, mingo, acme, linux-kernel, eranian
On Thu, 22 Aug 2019, Andi Kleen wrote:
> On Thu, Aug 22, 2019 at 08:29:46PM +0200, Thomas Gleixner wrote:
> > On Thu, 22 Aug 2019, Andi Kleen wrote:
> >
> > > > + /*
> > > > + * Disable interrupts to prevent the events in this CPU's cpuc
> > > > + * going away and getting freed.
> > > > + */
> > > > + local_irq_save(flags);
> > >
> > > I believe it's also needed to disable preemption. Probably should
> > > add a comment, or better an explicit preempt_disable() too.
> >
> > Preemption is implicitly disabled by disabling interrupts.
>
> Yes it is, this is more for documentation.
Comment, yes. An explicit preempt_disable() surely not.
Thanks,
tglx