From: "Vivekanandan, Balasubramani" <balasubramani.vivekanandan@intel.com>
To: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: <intel-xe@lists.freedesktop.org>,
Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>,
Tvrtko Ursulin <tursulin@ursulin.net>
Subject: Re: [PATCH 2/7] drm/xe: Add helper to capture context runtime
Date: Tue, 16 Apr 2024 21:15:16 +0530 [thread overview]
Message-ID: <Zh6dDMMFsBMLPIbK@bvivekan-mobl2> (raw)
In-Reply-To: <5yckw2zkfkdzlv6gjx77tmrtfpqrxe7yoywtgvbdzslnamyfha@lrfctmrwcqn5>
On 16.04.2024 08:42, Lucas De Marchi wrote:
> On Tue, Apr 16, 2024 at 10:56:13AM +0530, Vivekanandan, Balasubramani wrote:
> > On 15.04.2024 20:04, Lucas De Marchi wrote:
> > > From: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
> > >
> > > Add a helper to update the runtime of an exec_queue and accumulate
> > > it at 2 places:
> > >
> > > 1. when the exec_queue is destroyed
> > > 2. when the sched job is completed
> > >
> > > Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
> > > Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
> > > ---
> > > drivers/gpu/drm/xe/xe_device_types.h | 9 +++++++
> > > drivers/gpu/drm/xe/xe_exec_queue.c | 37 ++++++++++++++++++++++++++++
> > > drivers/gpu/drm/xe/xe_exec_queue.h | 1 +
> > > drivers/gpu/drm/xe/xe_sched_job.c | 2 ++
> > > 4 files changed, 49 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
> > > index 60ced5f90c2b..f6632b4d8399 100644
> > > --- a/drivers/gpu/drm/xe/xe_device_types.h
> > > +++ b/drivers/gpu/drm/xe/xe_device_types.h
> > > @@ -553,6 +553,15 @@ struct xe_file {
> > > struct mutex lock;
> > > } exec_queue;
> > >
> > > + /**
> > > + * @runtime: hw engine class runtime in ticks for this drm client
> > > + *
> > > + * Only stats from xe_exec_queue->lrc[0] are accumulated. For the
> > > + * multi-lrc case, all jobs run in parallel on the engines, so the
> > > + * stats from lrc[0] are sufficient.
> > > + */
> > > + u64 runtime[XE_ENGINE_CLASS_MAX];
> > > +
> > > /** @client: drm client */
> > > struct xe_drm_client *client;
> > > };
> > > diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
> > > index 71bd52dfebcf..c752d292fd33 100644
> > > --- a/drivers/gpu/drm/xe/xe_exec_queue.c
> > > +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
> > > @@ -214,6 +214,8 @@ void xe_exec_queue_fini(struct xe_exec_queue *q)
> > > {
> > > int i;
> > >
> > > + xe_exec_queue_update_runtime(q);
> > > +
> > > for (i = 0; i < q->width; ++i)
> > > xe_lrc_finish(q->lrc + i);
> > > if (!(q->flags & EXEC_QUEUE_FLAG_PERMANENT) && (q->flags & EXEC_QUEUE_FLAG_VM || !q->vm))
> > > @@ -769,6 +771,41 @@ bool xe_exec_queue_is_idle(struct xe_exec_queue *q)
> > > q->lrc[0].fence_ctx.next_seqno - 1;
> > > }
> > >
> > > +/**
> > > + * xe_exec_queue_update_runtime() - Update runtime for this exec queue from hw
> > > + * @q: The exec queue
> > > + *
> > > + * Update the timestamp saved by HW for this exec queue and accumulate
> > > + * the runtime calculated from the delta since the last update. In the
> > > + * multi-lrc case, only the first LRC is considered.
> > > + */
> > > +void xe_exec_queue_update_runtime(struct xe_exec_queue *q)
> > > +{
> > > + struct xe_file *xef;
> > > + struct xe_lrc *lrc;
> > > + u32 old_ts, new_ts;
> > > +
> > > + /*
> > > + * Jobs that are run during driver load may use an exec_queue, but are
> > > + * not associated with a user xe file, so avoid accumulating busyness
> > > + * for kernel specific work.
> > > + */
> > > + if (!q->vm || !q->vm->xef)
> > > + return;
> > > +
> > > + xef = q->vm->xef;
> > > + lrc = &q->lrc[0];
> > > +
> > > + new_ts = xe_lrc_update_timestamp(lrc, &old_ts);
> > > +
> > > + /*
> > > + * Special case the very first timestamp: we don't want the
> > > + * initial delta to be a huge value
> > > + */
> > > + if (old_ts)
> > > + xef->runtime[q->class] += new_ts - old_ts;
> > What is the need for accumulating the delta instead of using the
> > absolute timestamp read from CTX_TIMESTAMP?
> > This would break if xe_lrc_update_timestamp() is called from some
> > additional place in the future. The delta would be incorrect.
>
> can you clarify the breakage?
>
> - CTX_TIMESTAMP is per context (or exec_queue if you want to use the sw
> name)
> - Reported runtime is per client.
> - any update to xef->runtime[] should only ever be done by
> xe_lrc_update_timestamp()
>
> Anytime xe_lrc_update_timestamp() is called, it updates the timestamp,
> saves the new one in the lrc, and updates the delta in the xef. The
Every time xe_lrc_update_timestamp() is invoked, it reads
CTX_TIMESTAMP and caches the value in the lrc; that cached value is
refreshed on every invocation of xe_lrc_update_timestamp().
xe_exec_queue_update_runtime() uses the delta between the new value
read from CTX_TIMESTAMP and the value stored in the lrc. In this
series, xe_lrc_update_timestamp() is called only from
xe_exec_queue_update_runtime(), but there is nothing preventing
xe_lrc_update_timestamp() from being called from some other place in
the future. In that case, the cached value in the lrc would be
refreshed by that invocation outside of
xe_exec_queue_update_runtime(), so the next time
xe_exec_queue_update_runtime() calls xe_lrc_update_timestamp(), an
update would have been lost.
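To make the concern concrete, below is a toy userspace model of the
delta bookkeeping (a sketch only: the names, the tick values and the
hypothetical second caller are all made up; this is not the driver
code):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Toy model of the lrc: caches the last CTX_TIMESTAMP sampled. */
struct toy_lrc {
	uint32_t ctx_timestamp;
};

static uint32_t hw_ts;	/* stands in for the CTX_TIMESTAMP register */

/* Mirrors the caching behaviour of xe_lrc_update_timestamp(). */
static uint32_t toy_update_timestamp(struct toy_lrc *lrc, uint32_t *old_ts)
{
	*old_ts = lrc->ctx_timestamp;
	lrc->ctx_timestamp = hw_ts;
	return lrc->ctx_timestamp;
}

int main(void)
{
	struct toy_lrc lrc = { 0 };
	uint64_t runtime = 0;
	uint32_t old_ts, new_ts;

	/* First accumulation call: only primes the cache (old_ts == 0). */
	hw_ts = 100;
	new_ts = toy_update_timestamp(&lrc, &old_ts);
	if (old_ts)
		runtime += new_ts - old_ts;

	/* Hypothetical future caller that does not accumulate. */
	hw_ts = 300;
	toy_update_timestamp(&lrc, &old_ts);

	/* Next accumulation call only sees the 300..350 delta. */
	hw_ts = 350;
	new_ts = toy_update_timestamp(&lrc, &old_ts);
	if (old_ts)
		runtime += new_ts - old_ts;

	/*
	 * 250 ticks elapsed since the first sample, but only 50 are
	 * counted: the 100..300 delta was consumed by the middle caller.
	 */
	printf("runtime = %" PRIu64 "\n", runtime);
	return 0;
}

Running it prints runtime = 50, i.e. the 200 ticks consumed by the
middle caller never reach the accumulated runtime.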
> value in xef is the **runtime** for all the exec_queues created by that
> client, per engine class.
>
> Note that we already call it from multiple places with this patch
No, xe_lrc_update_timestamp() is called only from
xe_exec_queue_update_runtime(). It is xe_exec_queue_update_runtime()
that is called from multiple places.
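If such callers ever appear, one way to keep the accounting safe
(purely a hypothetical sketch; the read_ctx_timestamp() helper and
the accumulated_ts field are made-up names, not existing xe code)
would be to fold the accumulation into xe_lrc_update_timestamp()
itself:

/*
 * Hypothetical variant: the lrc keeps its own running total, so no
 * caller can silently consume a delta.
 */
u32 xe_lrc_update_timestamp(struct xe_lrc *lrc, u32 *old_ts)
{
	u32 new_ts = read_ctx_timestamp(lrc);	/* hw read, made-up name */

	*old_ts = lrc->ctx_timestamp;
	lrc->ctx_timestamp = new_ts;

	/* Accumulate here so every caller preserves the delta. */
	if (*old_ts)
		lrc->accumulated_ts += new_ts - *old_ts;

	return new_ts;
}

xe_exec_queue_update_runtime() could then publish lrc->accumulated_ts
into xef->runtime[] instead of computing the delta itself.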
Regards,
Bala
> series:
>
> 1. when the exec_queue is destroyed
> 2. when the sched job is completed
> 3. when userspace queries the runtime
>
> ... so I don't think I understood what would break.
>
> Lucas De Marchi