public inbox for linux-kernel@vger.kernel.org
From: Peter Zijlstra <peterz@infradead.org>
To: Namhyung Kim <namhyung@kernel.org>
Cc: "Liang, Kan" <kan.liang@linux.intel.com>,
	Ingo Molnar <mingo@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Arnaldo Carvalho de Melo <acme@kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Ravi Bangoria <ravi.bangoria@amd.com>,
	Stephane Eranian <eranian@google.com>,
	Ian Rogers <irogers@google.com>,
	Mingwei Zhang <mizhang@google.com>
Subject: Re: [PATCH v2] perf/core: Optimize event reschedule for a PMU
Date: Tue, 6 Aug 2024 09:56:30 +0200	[thread overview]
Message-ID: <20240806075630.GL37996@noisy.programming.kicks-ass.net> (raw)
In-Reply-To: <CAM9d7cj8YMt-YiVZ=7dRiEnfODqo=WLRJ87Rd134YR_O6MU_Qg@mail.gmail.com>

On Mon, Aug 05, 2024 at 11:19:48PM -0700, Namhyung Kim wrote:
> On Mon, Aug 5, 2024 at 7:58 AM Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > On Mon, Aug 05, 2024 at 11:20:58AM +0200, Peter Zijlstra wrote:
> > > On Fri, Aug 02, 2024 at 02:30:19PM -0400, Liang, Kan wrote:
> > > > > @@ -2792,7 +2833,14 @@ static int  __perf_install_in_context(void *info)
> > > > >   if (reprogram) {
> > > > >           ctx_sched_out(ctx, EVENT_TIME);
> > > > >           add_event_to_ctx(event, ctx);
> > > > > -         ctx_resched(cpuctx, task_ctx, get_event_type(event));
> > > > > +         if (ctx->nr_events == 1) {
> > > > > +                 /* The first event needs to set ctx->is_active. */
> > > > > +                 ctx_resched(cpuctx, task_ctx, NULL, get_event_type(event));
> > > > > +         } else {
> > > > > +                 ctx_resched(cpuctx, task_ctx, event->pmu_ctx->pmu,
> > > > > +                             get_event_type(event));
> > > > > +                 ctx_sched_in(ctx, EVENT_TIME);
> > > >
> > > > The changelog doesn't say much about the time difference. To my
> > > > understanding, the time is shared among PMUs in the same ctx.
> > > > When perf does ctx_resched(), the time is deducted.
> > > > There is no problem with stopping and restarting the global time
> > > > when perf re-schedules all PMUs.
> > > > But if only one PMU is re-scheduled while others are still running,
> > > > stopping and restarting the global time may be a problem: the other
> > > > PMUs will be impacted.
> > >
> > > So afaict, since we hold ctx->lock, nobody can observe EVENT_TIME was
> > > cleared for a little while.
> > >
> > > So the point was to make all the various ctx_sched_out() calls have the
> > > same timestamp. It does this by clearing EVENT_TIME first. Then the
> > > first ctx_sched_in() will set it again, and later ctx_sched_in() won't
> > > touch time.
> > >
> > > That leaves a little hole, because the time between
> > > ctx_sched_out(EVENT_TIME) and the first ctx_sched_in() gets lost.
> > >
> > > This isn't typically a problem, but it's not very nice. Let me go find
> > > an alternative solution for this. The simple update I did on Saturday
> > > is broken as per the perf test.
> >
> > OK, it took a little longer than I would have liked, and it isn't
> > entirely pretty, but it seems to pass 'perf test'.
> >
> > Please look at: queue.git perf/resched
> >
> > I'll try and post it all tomorrow.
> 
> Thanks for doing this.  But some of my tests are still failing.
> I'm seeing some system-wide events are not counted.
> Let me take a deeper look at it.

Does this help? What would be an easy reproducer?

---
diff --git a/kernel/events/core.c b/kernel/events/core.c
index c67fc43fe877..4a04611333d9 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -179,23 +179,27 @@ static void perf_ctx_lock(struct perf_cpu_context *cpuctx,
 	}
 }
 
+static inline void __perf_ctx_unlock(struct perf_event_context *ctx)
+{
+	/*
+	 * If ctx_sched_in() didn't again set any ALL flags, clean up
+	 * after ctx_sched_out() by clearing is_active.
+	 */
+	if (ctx->is_active & EVENT_FROZEN) {
+		if (!(ctx->is_active & EVENT_ALL))
+			ctx->is_active = 0;
+		else
+			ctx->is_active &= ~EVENT_FROZEN;
+	}
+	raw_spin_unlock(&ctx->lock);
+}
+
 static void perf_ctx_unlock(struct perf_cpu_context *cpuctx,
 			    struct perf_event_context *ctx)
 {
-	if (ctx) {
-		/*
-		 * If ctx_sched_in() didn't again set any ALL flags, clean up
-		 * after ctx_sched_out() by clearing is_active.
-		 */
-		if (ctx->is_active & EVENT_FROZEN) {
-			if (!(ctx->is_active & EVENT_ALL))
-				ctx->is_active = 0;
-			else
-				ctx->is_active &= ~EVENT_FROZEN;
-		}
-		raw_spin_unlock(&ctx->lock);
-	}
-	raw_spin_unlock(&cpuctx->ctx.lock);
+	if (ctx)
+		__perf_ctx_unlock(ctx);
+	__perf_ctx_unlock(&cpuctx->ctx);
 }
 
 #define TASK_TOMBSTONE ((void *)-1L)


Thread overview: 21+ messages
2024-07-31  0:06 [PATCH v2] perf/core: Optimize event reschedule for a PMU Namhyung Kim
2024-08-02 17:39 ` Namhyung Kim
2024-08-02 18:30 ` Liang, Kan
2024-08-02 18:38   ` Peter Zijlstra
2024-08-02 18:43     ` Peter Zijlstra
2024-08-02 18:50       ` Peter Zijlstra
2024-08-02 19:11         ` Peter Zijlstra
2024-08-02 19:31           ` Liang, Kan
2024-08-02 19:32           ` Namhyung Kim
2024-08-03 10:32           ` Peter Zijlstra
2024-08-03 17:08             ` Namhyung Kim
2024-08-05  6:39               ` Namhyung Kim
2024-08-05  9:15                 ` Peter Zijlstra
2024-08-05  9:05               ` Peter Zijlstra
2024-08-05  9:20   ` Peter Zijlstra
2024-08-05 14:58     ` Peter Zijlstra
2024-08-06  6:19       ` Namhyung Kim
2024-08-06  7:56         ` Peter Zijlstra [this message]
2024-08-06  8:07           ` Peter Zijlstra
2024-08-06 19:29             ` Namhyung Kim
2024-08-06 13:54       ` Liang, Kan
