From: "Dixit, Ashutosh" <ashutosh.dixit@intel.com>
To: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Cc: intel-gfx@lists.freedesktop.org
Subject: Re: [Intel-gfx] [PATCH] i915/pmu: Wire GuC backend to per-client busyness
Date: Wed, 31 Aug 2022 13:25:11 -0700
Message-ID: <87wnaorwig.wl-ashutosh.dixit@intel.com>
In-Reply-To: <Ywj1xF6Y+p6Ea5Qq@unerlige-ril>
On Fri, 26 Aug 2022 09:33:08 -0700, Umesh Nerlige Ramappa wrote:
>
Hi Umesh,
Just to communicate my thoughts, I have posted this patch on top of your
patch:
[1] https://patchwork.freedesktop.org/series/107983/
Could you please take a look at it and see if it makes sense?
> On Thu, Aug 25, 2022 at 06:44:50PM -0700, Dixit, Ashutosh wrote:
> > On Thu, 04 Aug 2022 16:21:25 -0700, Umesh Nerlige Ramappa wrote:
> >
> > Hi Umesh, I am fairly new to this code, so some of the questions below
> > will be newbie questions; thanks for bearing with me.
> >
> >> diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
> >> index 654a092ed3d6..e2d70a9fdac0 100644
> >> --- a/drivers/gpu/drm/i915/gt/intel_context.c
> >> +++ b/drivers/gpu/drm/i915/gt/intel_context.c
> >> @@ -576,16 +576,24 @@ void intel_context_bind_parent_child(struct intel_context *parent,
> >>          child->parallel.parent = parent;
> >>  }
> >>
> >> -u64 intel_context_get_total_runtime_ns(const struct intel_context *ce)
> >> +u64 intel_context_get_total_runtime_ns(struct intel_context *ce)
> >>  {
> >>          u64 total, active;
> >>
> >> +        if (ce->ops->update_stats)
> >> +                ce->ops->update_stats(ce);
> >> +
> >>          total = ce->stats.runtime.total;
> >>          if (ce->ops->flags & COPS_RUNTIME_CYCLES)
> >>                  total *= ce->engine->gt->clock_period_ns;
> >>
> >>          active = READ_ONCE(ce->stats.active);
> >> -        if (active)
> >> +        /*
> >> +         * When COPS_RUNTIME_ACTIVE_TOTAL is set for ce->cops, the backend
> >> +         * already provides the total active time of the context, so skip this
> >> +         * calculation when this flag is set.
> >> +         */
> >> +        if (active && !(ce->ops->flags & COPS_RUNTIME_ACTIVE_TOTAL))
> >>                  active = intel_context_clock() - active;
> >>
> >>          return total + active;
> >
> > /snip/
> >
> >> @@ -1396,6 +1399,10 @@ static void guc_timestamp_ping(struct work_struct *wrk)
> >>          with_intel_runtime_pm(&gt->i915->runtime_pm, wakeref)
> >>                  __update_guc_busyness_stats(guc);
> >>
> >> +        /* adjust context stats for overflow */
> >> +        xa_for_each(&guc->context_lookup, index, ce)
> >> +                __guc_context_update_clks(ce);
> >
> > What is the reason for calling __guc_context_update_clks() periodically
> > from guc_timestamp_ping()? It appears we should just be able to call it
> > from intel_context_get_total_runtime_ns() to update 'active'. Is it
> > because the calculations in __guc_context_update_clks() become invalid
> > if the counters overflow?
>
> Correct, these are 32-bit counters and the worker just tracks overflow.
OK.
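For my own notes, the overflow handling then reduces to the usual trick of
splicing a 32-bit HW sample onto a 64-bit software timestamp that the
worker refreshes faster than the counter can wrap. A minimal sketch of the
idea (not the exact __extend_last_switch() code, and assuming the sample is
always within half a wrap of the 64-bit reference):

/*
 * Sketch only: extend a 32-bit counter sample to 64 bits using a 64-bit
 * reference 'ref' that is refreshed at least once per half wrap period of
 * the counter, so the sample and ref are within 2^31 ticks of each other.
 */
static u64 extend_32b_sample(u64 ref, u32 sample)
{
        u32 ref_lo = lower_32_bits(ref);
        u64 hi = ref >> 32;

        if (sample > ref_lo && (sample - ref_lo) > 0x80000000u)
                hi--;   /* sample was taken before 'ref' wrapped */
        else if (sample < ref_lo && (ref_lo - sample) > 0x80000000u)
                hi++;   /* sample wrapped past 'ref' */

        return (hi << 32) | sample;
}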
>
> >
> >> +
> >>          intel_gt_reset_unlock(gt, srcu);
> >>
> >>          mod_delayed_work(system_highpri_wq, &guc->timestamp.work,
> >> @@ -1469,6 +1476,56 @@ void intel_guc_busyness_unpark(struct intel_gt *gt)
> >>                           guc->timestamp.ping_delay);
> >>  }
> >>
> >> +static void __guc_context_update_clks(struct intel_context *ce)
> >> +{
> >> +        struct intel_guc *guc = ce_to_guc(ce);
> >> +        struct intel_gt *gt = ce->engine->gt;
> >> +        u32 *pphwsp, last_switch, engine_id;
> >> +        u64 start_gt_clk, active;
> >> +        unsigned long flags;
> >> +        ktime_t unused;
> >> +
> >> +        spin_lock_irqsave(&guc->timestamp.lock, flags);
> >> +
> >> +        /*
> >> +         * GPU updates ce->lrc_reg_state[CTX_TIMESTAMP] when context is switched
> >> +         * out, however GuC updates PPHWSP offsets below. Hence KMD (CPU)
> >> +         * relies on GuC and GPU for busyness calculations. Due to this, a
> >> +         * potential race was highlighted in an earlier review that can lead to
> >> +         * double accounting of busyness. While the solution to this is a WIP,
> >> +         * busyness is still usable for platforms running GuC submission.
> >> +         */
> >> +        pphwsp = ((void *)ce->lrc_reg_state) - LRC_STATE_OFFSET;
> >> +        last_switch = READ_ONCE(pphwsp[PPHWSP_GUC_CONTEXT_USAGE_STAMP_LO]);
> >> +        engine_id = READ_ONCE(pphwsp[PPHWSP_GUC_CONTEXT_USAGE_ENGINE_ID]);
> >> +
> >> +        guc_update_pm_timestamp(guc, &unused);
> >> +
> >> +        if (engine_id != 0xffffffff && last_switch) {
> >> +                start_gt_clk = READ_ONCE(ce->stats.runtime.start_gt_clk);
> >> +                __extend_last_switch(guc, &start_gt_clk, last_switch);
> >> +                active = intel_gt_clock_interval_to_ns(gt, guc->timestamp.gt_stamp - start_gt_clk);
> >> +                WRITE_ONCE(ce->stats.runtime.start_gt_clk, start_gt_clk);
> >> +                WRITE_ONCE(ce->stats.active, active);
> >
> > Should not need WRITE_ONCE to update regular memory. Not even sure we need
> > READ_ONCE above.
>
> I am not sure I checked what they do exactly. I was thinking these are
> needed for memory ordering (as in, to be sure that start_gt_clk is updated
> before active).
As long as our operations are done under the correct locks, we don't have
to worry about memory ordering. That is one of the reasons I am doing
everything under the spinlock in [1].
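To spell out the shape I mean (sketch only, using the names from your
patch; the point is the locking, not the exact code):

/* writer: plain stores, the spinlock orders them for any reader */
spin_lock_irqsave(&guc->timestamp.lock, flags);
ce->stats.runtime.start_gt_clk = start_gt_clk;
ce->stats.active = active;
spin_unlock_irqrestore(&guc->timestamp.lock, flags);

/* reader: takes the same lock, so it sees both updates or neither */
spin_lock_irqsave(&guc->timestamp.lock, flags);
start_gt_clk = ce->stats.runtime.start_gt_clk;
active = ce->stats.active;
spin_unlock_irqrestore(&guc->timestamp.lock, flags);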
>
> >
> >> +        } else {
> >> +                lrc_update_runtime(ce);
> >
> > As was being discussed, we should not need this in this function. See
> > below too.
>
> In short, I added this here so that a busyness query issued right after
> the context goes idle returns an up-to-date value. For the GuC backend,
> the context is unpinned after scheduling is disabled on it, and that is
> asynchronous. Also, if there are more requests on that context, scheduling
> may not be disabled and the unpin may not happen, so the updated runtime
> would only be seen much later.
>
> It is still safe to call from here because we know that the context is not
> active and has switched out. If it did switch in while we were reading
> this, that's still fine; we would only report the value stored in the
> context image.
Agreed, but in [1] I have made this unconditional; I am not sure if you
will agree or see problems with that.
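To make "unconditional" concrete, the shape in [1] is roughly the
following. This is a simplified, untested sketch, not the exact diff, and
the helper name is mine:

static u64 guc_context_get_active_ns(struct intel_context *ce)
{
        struct intel_guc *guc = ce_to_guc(ce);
        struct intel_gt *gt = ce->engine->gt;
        u32 *pphwsp = ((void *)ce->lrc_reg_state) - LRC_STATE_OFFSET;
        unsigned long flags;
        u64 active = 0;
        ktime_t unused;

        spin_lock_irqsave(&guc->timestamp.lock, flags);

        /* always fold the saved CTX_TIMESTAMP into runtime.total */
        lrc_update_runtime(ce);

        /* add the active delta only if the context is switched in */
        if (pphwsp[PPHWSP_GUC_CONTEXT_USAGE_ENGINE_ID] != 0xffffffff) {
                u64 start = ce->stats.runtime.start_gt_clk;

                guc_update_pm_timestamp(guc, &unused);
                __extend_last_switch(guc, &start,
                                     pphwsp[PPHWSP_GUC_CONTEXT_USAGE_STAMP_LO]);
                active = intel_gt_clock_interval_to_ns(gt,
                                guc->timestamp.gt_stamp - start);
        }

        spin_unlock_irqrestore(&guc->timestamp.lock, flags);

        return active;
}

The point is that lrc_update_runtime() runs whether or not the context is
currently switched in, and 'active' never leaves the stack.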
>
> >
> >> +        }
> >> +
> >> +        spin_unlock_irqrestore(&guc->timestamp.lock, flags);
> >> +}
> >> +
> >> +static void guc_context_update_stats(struct intel_context *ce)
> >> +{
> >> +        if (!intel_context_pin_if_active(ce)) {
> >> +                WRITE_ONCE(ce->stats.runtime.start_gt_clk, 0);
> >> +                WRITE_ONCE(ce->stats.active, 0);
> >
> > Why do these need to be initialized to 0? It looks like the calculations
> > in __guc_context_update_clks() will work even if we don't do this. Also,
> > I didn't follow the 'if (!intel_context_pin_if_active(ce))' check.
>
> __guc_context_update_clks() accesses the context image, so we need to make
> sure it's pinned. intel_context_pin_if_active() will not sleep/wait, so we
> can use it in this path.
I have added pinning in [1].
> If the context is not active, then we update the active stats to 0.
In [1] active is just a local variable and I don't touch ce->stats.active
at all.
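So the reader side, inside intel_context_get_total_runtime_ns() after
'total' has been computed as before, becomes roughly (again a sketch,
reusing the hypothetical helper sketched above):

/* 'active' never lands in ce->stats, it stays on the stack */
u64 active = 0;

if (intel_context_pin_if_active(ce)) {
        active = guc_context_get_active_ns(ce);
        intel_context_unpin(ce);
}

return total + active;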
> >> +                return;
> >> +        }
> >> +
> >> +        __guc_context_update_clks(ce);
> >> +        intel_context_unpin(ce);
> >> +}
> >> +
> >>  static inline bool
> >>  submission_disabled(struct intel_guc *guc)
> >>  {
> >> @@ -2723,6 +2780,7 @@ static void guc_context_unpin(struct intel_context *ce)
> >>  {
> >>          struct intel_guc *guc = ce_to_guc(ce);
> >>
> >> +        lrc_update_runtime(ce);
> >
> > How about moving this into lrc_unpin(), since that gets called from all
> > GuC context types (parent/child/virtual)?
>
> Looks like lrc_unpin() is called from the context_unpin path.
>
> Same as above: for GuC, context_unpin is an async operation and may not
> happen if there are multiple requests in the queue.
In [1] I have left lrc_update_runtime() in guc_context_unpin() but changed
it to lrc_update_runtime_locked().
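For completeness, the locked variant is just a thin wrapper (sketch of
what [1] does, untested here):

static void lrc_update_runtime_locked(struct intel_context *ce)
{
        struct intel_guc *guc = ce_to_guc(ce);
        unsigned long flags;

        spin_lock_irqsave(&guc->timestamp.lock, flags);
        lrc_update_runtime(ce);
        spin_unlock_irqrestore(&guc->timestamp.lock, flags);
}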
Thanks.
--
Ashutosh