Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: Francois Dugast <francois.dugast@intel.com>
Cc: <intel-xe@lists.freedesktop.org>, <thomas.hellstrom@linux.intel.com>
Subject: Re: [RFC v1 3/9] drm/xe/hw_engine_group: Register hw engine group's exec queues
Date: Mon, 22 Jul 2024 17:47:40 +0000
Message-ID: <Zp6bPEteuu+Jl9OO@DUT025-TGLU.fm.intel.com>
In-Reply-To: <Zp4Y9ZHA51INVtr1@fdugast-desk>

On Mon, Jul 22, 2024 at 10:31:49AM +0200, Francois Dugast wrote:
> On Wed, Jul 17, 2024 at 11:19:13PM +0000, Matthew Brost wrote:
> > On Wed, Jul 17, 2024 at 03:07:24PM +0200, Francois Dugast wrote:
> > > Add helpers to safely add and delete the exec queues attached to a hw
> > > engine group, and make use of them at the time of creation and destruction
> > > of the exec queues. Keeping track of them is required to control the
> > > execution mode of the hw engine group.
> > > 
> > 
> > Ugh, missed one thing.
> > 
> > > Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> > > ---
> > >  drivers/gpu/drm/xe/xe_exec_queue.c |  7 +++++
> > >  drivers/gpu/drm/xe/xe_hw_engine.c  | 45 ++++++++++++++++++++++++++++++
> > >  drivers/gpu/drm/xe/xe_hw_engine.h  |  4 +++
> > >  3 files changed, 56 insertions(+)
> > > 
> > > diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
> > > index 0ba37835849b..645423a63434 100644
> > > --- a/drivers/gpu/drm/xe/xe_exec_queue.c
> > > +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
> > > @@ -192,6 +192,9 @@ void xe_exec_queue_destroy(struct kref *ref)
> > >  			xe_exec_queue_put(eq);
> > >  	}
> > >  
> > > +	if (q->vm)
> > > +		xe_hw_engine_group_del_exec_queue(q->hwe->hw_engine_group, q);
> > > +
> > >  	q->ops->fini(q);
> > >  }
> > >  
> > > @@ -640,6 +643,10 @@ int xe_exec_queue_create_ioctl(struct drm_device *dev, void *data,
> > >  			if (XE_IOCTL_DBG(xe, err))
> > >  				goto put_exec_queue;
> > >  		}
> > > +
> > > +		err = xe_hw_engine_group_add_exec_queue(q->hwe->hw_engine_group, q);
> > > +		if (err)
> > > +			goto put_exec_queue;
> > >  	}
> > >  
> > >  	mutex_lock(&xef->exec_queue.lock);
> > > diff --git a/drivers/gpu/drm/xe/xe_hw_engine.c b/drivers/gpu/drm/xe/xe_hw_engine.c
> > > index f8df85d25617..4dcc885a55c8 100644
> > > --- a/drivers/gpu/drm/xe/xe_hw_engine.c
> > > +++ b/drivers/gpu/drm/xe/xe_hw_engine.c
> > > @@ -1178,3 +1178,48 @@ u64 xe_hw_engine_read_timestamp(struct xe_hw_engine *hwe)
> > >  {
> > >  	return xe_mmio_read64_2x32(hwe->gt, RING_TIMESTAMP(hwe->mmio_base));
> > >  }
> > > +
> > > +/**
> > > + * xe_hw_engine_group_add_exec_queue() - Add an exec queue to a hw engine group
> > > + * @group: The hw engine group
> > > + * @q: The exec_queue
> > > + *
> > > + * Return: 0 on success,
> > > + *	    -EINTR if the lock could not be acquired
> > > + */
> > > +int xe_hw_engine_group_add_exec_queue(struct xe_hw_engine_group *group, struct xe_exec_queue *q)
> > > +{
> > > +	int err;
> > > +
> > > +	err = down_write_killable(&group->mode_sem);
> > > +	if (err)
> > > +		return err;
> > > +
> > 
> > You need to ensure an exec queue in fault mode is in the suspended
> > state if the group is in dma-fence mode:
> > 
> > if (xe_vm_in_fault_mode(q->vm) && group->cur_mode == EXEC_MODE_DMA_FENCE) {
> > 	q->suspend();
> > 	q->suspend_wait();
> > 	queue_work(resume);
> > }
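> > 
> > To be concrete, a rough sketch of where this could live in
> > xe_hw_engine_group_add_exec_queue(), after taking mode_sem and before
> > the list_add(). Treat the names as placeholders: group->cur_mode,
> > EXEC_MODE_DMA_FENCE and the resume worker are what I'm assuming from
> > later patches in this series, not existing code:
> > 
> > 	if (xe_vm_in_fault_mode(q->vm) &&
> > 	    group->cur_mode == EXEC_MODE_DMA_FENCE) {
> > 		/* Park the faulting (LR) queue so it cannot run while
> > 		 * the group is executing dma-fence jobs. */
> > 		q->ops->suspend(q);
> > 		q->ops->suspend_wait(q);
> > 		/* Kick the (assumed) resume worker so LR queues are
> > 		 * restarted once the dma-fence jobs complete. */
> > 		queue_work(system_wq, &group->resume_work);
> > 	}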
> 
> This happens later in the series, when switching the execution mode to
> EXEC_MODE_DMA_FENCE at the time of job submission. But this function here is
> called at the time of exec queue creation. Am I missing a path where the exec
> queue would be executing before a job submission happens?
> 

Oh, yeah, I guess if we always switch modes in the exec IOCTL then this
probably wouldn't be needed. It would be needed if the resume worker is
just kicked from the exec IOCTL.

Conceptually, though, I think it is probably best to keep all the queues
in the group in the same state. Having mixed state just seems like a
good way to get into trouble.
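
As a purely hypothetical example, a sanity check along these lines when
the group switches into dma-fence mode would catch mixed state early.
exec_queue_is_suspended() is a placeholder, not an existing helper, and
assume an xe device pointer is in scope:

	struct xe_exec_queue *q;

	lockdep_assert_held_write(&group->mode_sem);
	/* Every faulting (LR) queue in the group must already be
	 * suspended before dma-fence jobs are allowed to run. */
	list_for_each_entry(q, &group->exec_queue_list, hw_engine_group_link)
		xe_assert(xe, !xe_vm_in_fault_mode(q->vm) ||
			  exec_queue_is_suspended(q));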

Matt

> Francois
> 
> > 
> > Matt
> > 
> > > +	list_add(&q->hw_engine_group_link, &group->exec_queue_list);
> > > +	up_write(&group->mode_sem);
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +/**
> > > + * xe_hw_engine_group_del_exec_queue() - Delete an exec queue from a hw engine group
> > > + * @group: The hw engine group
> > > + * @q: The exec_queue
> > > + *
> > > + * Return: 0 on success,
> > > + *	    -EINTR if the lock could not be acquired
> > > + */
> > > +int xe_hw_engine_group_del_exec_queue(struct xe_hw_engine_group *group, struct xe_exec_queue *q)
> > > +{
> > > +	int err;
> > > +
> > > +	err = down_write_killable(&group->mode_sem);
> > > +	if (err)
> > > +		return err;
> > > +
> > > +	if (!list_empty(&group->exec_queue_list))
> > > +		list_del(&q->hw_engine_group_link);
> > > +	up_write(&group->mode_sem);
> > > +
> > > +	return 0;
> > > +}
> > > diff --git a/drivers/gpu/drm/xe/xe_hw_engine.h b/drivers/gpu/drm/xe/xe_hw_engine.h
> > > index 7f2d27c0ba1a..ce59d83a75ad 100644
> > > --- a/drivers/gpu/drm/xe/xe_hw_engine.h
> > > +++ b/drivers/gpu/drm/xe/xe_hw_engine.h
> > > @@ -9,6 +9,7 @@
> > >  #include "xe_hw_engine_types.h"
> > >  
> > >  struct drm_printer;
> > > +struct xe_exec_queue;
> > >  
> > >  #ifdef CONFIG_DRM_XE_JOB_TIMEOUT_MIN
> > >  #define XE_HW_ENGINE_JOB_TIMEOUT_MIN CONFIG_DRM_XE_JOB_TIMEOUT_MIN
> > > @@ -70,4 +71,7 @@ static inline bool xe_hw_engine_is_valid(struct xe_hw_engine *hwe)
> > >  const char *xe_hw_engine_class_to_str(enum xe_engine_class class);
> > >  u64 xe_hw_engine_read_timestamp(struct xe_hw_engine *hwe);
> > >  
> > > +int xe_hw_engine_group_add_exec_queue(struct xe_hw_engine_group *group, struct xe_exec_queue *q);
> > > +int xe_hw_engine_group_del_exec_queue(struct xe_hw_engine_group *group, struct xe_exec_queue *q);
> > > +
> > >  #endif
> > > -- 
> > > 2.43.0
> > > 
