From: Francois Dugast <francois.dugast@intel.com>
To: Matthew Brost <matthew.brost@intel.com>
Cc: <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH v6 10/13] drm/xe/guc_submit: Allow calling guc_exec_queue_resume with pending resume
Date: Thu, 8 Aug 2024 20:28:33 +0200
Message-ID: <ZrUOUVJi34Gvv0mc@fdugast-desk>
In-Reply-To: <ZrQ/EU2pnJr0ElpW@DUT025-TGLU.fm.intel.com>
On Thu, Aug 08, 2024 at 03:44:17AM +0000, Matthew Brost wrote:
> On Tue, Aug 06, 2024 at 09:27:08PM +0200, Francois Dugast wrote:
> > Make it safe to have multiple calls to guc_exec_queue_resume() for the
> > same exec queue. This allows the function to be used from a context where
> > knowledge about prior calls is missing. In that case the function does
> > nothing, but at least it does not trip the assert.
> >
> > Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> > ---
> > drivers/gpu/drm/xe/xe_guc_submit.c | 5 ++---
> > 1 file changed, 2 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
> > index 50013e1e7455..782561b21d24 100644
> > --- a/drivers/gpu/drm/xe/xe_guc_submit.c
> > +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
> > @@ -1623,10 +1623,9 @@ static int guc_exec_queue_suspend_wait(struct xe_exec_queue *q)
> > static void guc_exec_queue_resume(struct xe_exec_queue *q)
> > {
> > struct xe_sched_msg *msg = q->guc->static_msgs + STATIC_MSG_RESUME;
> > - struct xe_guc *guc = exec_queue_to_guc(q);
> > - struct xe_device *xe = guc_to_xe(guc);
> >
> > - xe_assert(xe, !q->guc->suspend_pending);
>
> I think the existing assert is correct. We certainly don't want to call
> resume until suspend_wait returns (sets q->guc->suspend_pending ==
> false).
In the end this was a workaround for this issue [1], which is now fixed,
so I will drop this patch in the next revision.
Francois
[1] https://patchwork.freedesktop.org/patch/606817/?series=136192&rev=5#comment_1103293
>
> I think if we want to make this safe to call multiple times we need
> a new function 'guc_exec_queue_try_add_msg'. This function would check
> msg->link under the lock and, if the list is empty, add the message. It is
> harmless for __guc_exec_queue_process_msg_resume to be called multiple
> times, but it is not harmless to add the message to the list twice.
>
> Matt
>
> > + if (q->guc->suspend_pending)
> > + return;
> >
> > guc_exec_queue_add_msg(q, msg, RESUME);
> > }
> > --
> > 2.43.0
> >
Thread overview: 24+ messages (newest: 2024-08-08 18:28 UTC)
2024-08-06 19:26 [PATCH v6 00/13] Parallel submission of dma fence jobs and LR jobs with shared hardware resources Francois Dugast
2024-08-06 19:26 ` [PATCH v6 01/13] drm/xe/hw_engine_group: Introduce xe_hw_engine_group Francois Dugast
2024-08-06 19:27 ` [PATCH v6 02/13] drm/xe/guc_submit: Make suspend_wait interruptible Francois Dugast
2024-08-06 19:27 ` [PATCH v6 03/13] drm/xe/hw_engine_group: Register hw engine group's exec queues Francois Dugast
2024-08-06 19:27 ` [PATCH v6 04/13] drm/xe/hw_engine_group: Add helper to suspend faulting LR jobs Francois Dugast
2024-08-06 19:27 ` [PATCH v6 05/13] drm/xe/exec_queue: Remove duplicated code Francois Dugast
2024-08-06 19:27 ` [PATCH v6 06/13] drm/xe/exec_queue: Prepare last fence for hw engine group resume context Francois Dugast
2024-08-06 19:27 ` [PATCH v6 07/13] drm/xe/hw_engine_group: Add helper to wait for dma fence jobs Francois Dugast
2024-08-06 19:27 ` [PATCH v6 08/13] drm/xe/hw_engine_group: Ensure safe transition between execution modes Francois Dugast
2024-08-06 19:27 ` [PATCH v6 09/13] drm/xe/exec: Switch hw engine group execution mode upon job submission Francois Dugast
2024-08-06 19:27 ` [PATCH v6 10/13] drm/xe/guc_submit: Allow calling guc_exec_queue_resume with pending resume Francois Dugast
2024-08-08 3:44 ` Matthew Brost
2024-08-08 18:28 ` Francois Dugast [this message]
2024-08-06 19:27 ` [PATCH v6 11/13] drm/xe/hw_engine_group: Resume exec queues suspended by dma fence jobs Francois Dugast
2024-08-06 19:27 ` [PATCH v6 12/13] drm/xe/vm: Remove restriction that all VMs must be faulting if one is Francois Dugast
2024-08-06 19:27 ` [PATCH v6 13/13] drm/xe/device: Remove unused xe_device::usm::num_vm_in_* Francois Dugast
2024-08-06 19:36 ` ✓ CI.Patch_applied: success for Parallel submission of dma fence jobs and LR jobs with shared hardware resources (rev6) Patchwork
2024-08-06 19:36 ` ✗ CI.checkpatch: warning " Patchwork
2024-08-06 19:37 ` ✓ CI.KUnit: success " Patchwork
2024-08-06 19:49 ` ✓ CI.Build: " Patchwork
2024-08-06 19:51 ` ✓ CI.Hooks: " Patchwork
2024-08-06 19:53 ` ✓ CI.checksparse: " Patchwork
2024-08-06 23:51 ` ✗ CI.FULL: failure " Patchwork
2024-08-07 4:47 ` ✗ CI.BAT: " Patchwork