Intel-XE Archive on lore.kernel.org
From: "Welty, Brian" <brian.welty@intel.com>
To: Matthew Brost <matthew.brost@intel.com>
Cc: intel-xe@lists.freedesktop.org
Subject: Re: [PATCH v2 3/5] drm/xe: Add exec_queue.sched_props.priority
Date: Thu, 4 Jan 2024 11:03:44 -0800	[thread overview]
Message-ID: <0cf6e9d0-a3fa-44f0-98e7-5cedb882ebe8@intel.com> (raw)
In-Reply-To: <ZZbg+5FsDPhvGzn+@DUT025-TGLU.fm.intel.com>



On 1/4/2024 8:46 AM, Matthew Brost wrote:
> On Wed, Jan 03, 2024 at 10:44:06AM -0800, Brian Welty wrote:
>> The purpose here is to allow to optimize exec_queue_set_priority()
>> in follow-on patch.  Currently it does q->ops->set_priority(...).
>> But we'd like to apply exec_queue_user_extensions much earlier and
>> q->ops cannot be called before __xe_exec_queue_init().
>>
>> It will be much more efficient to instead only have to set
>> q->sched_props.priority when applying user extensions. That value will
>> then already be set to the user requested value. So the setting of
>> default value is moved from q->ops->init() to __xe_exec_queue_alloc.
>>
>> v2: fix existing bug such that q->sched_props.priority is now set
>>      before guc_exec_queue_add_msg() (Matt)
>>      fix existing bug in that xe_migrate_init() should use exec_queue's
>>      vfunc for updating priority.
>>
>> Signed-off-by: Brian Welty <brian.welty@intel.com>
>> ---
>>   drivers/gpu/drm/xe/xe_exec_queue.c       | 1 +
>>   drivers/gpu/drm/xe/xe_exec_queue_types.h | 4 ++--
>>   drivers/gpu/drm/xe/xe_guc_submit.c       | 7 +++----
>>   drivers/gpu/drm/xe/xe_migrate.c          | 2 +-
>>   4 files changed, 7 insertions(+), 7 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
>> index e78b13845417..9891cddba71c 100644
>> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
>> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
>> @@ -67,6 +67,7 @@ static struct xe_exec_queue *__xe_exec_queue_alloc(struct xe_device *xe,
>>   				hwe->eclass->sched_props.preempt_timeout_us;
>>   	q->sched_props.job_timeout_ms =
>>   				hwe->eclass->sched_props.job_timeout_ms;
>> +	q->sched_props.priority = XE_EXEC_QUEUE_PRIORITY_NORMAL;
>>   
>>   	if (xe_exec_queue_is_parallel(q)) {
>>   		q->parallel.composite_fence_ctx = dma_fence_context_alloc(1);
>> diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
>> index 882eb5373980..6ae4f4e2ddca 100644
>> --- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
>> +++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
>> @@ -52,8 +52,6 @@ struct xe_exec_queue {
>>   	struct xe_vm *vm;
>>   	/** @class: class of this exec queue */
>>   	enum xe_engine_class class;
>> -	/** @priority: priority of this exec queue */
>> -	enum xe_exec_queue_priority priority;
>>   	/**
>>   	 * @logical_mask: logical mask of where job submitted to exec queue can run
>>   	 */
>> @@ -144,6 +142,8 @@ struct xe_exec_queue {
>>   		u32 preempt_timeout_us;
>>   		/** @job_timeout_ms: job timeout in milliseconds */
>>   		u32 job_timeout_ms;
>> +		/** @priority: priority of this exec queue */
>> +		enum xe_exec_queue_priority priority;
>>   	} sched_props;
>>   
>>   	/** @compute: compute exec queue state */
>> diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
>> index 56c0a7bf554f..392cbde62957 100644
>> --- a/drivers/gpu/drm/xe/xe_guc_submit.c
>> +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
>> @@ -421,7 +421,7 @@ static void init_policies(struct xe_guc *guc, struct xe_exec_queue *q)
>>   {
>>   	struct exec_queue_policy policy;
>>   	struct xe_device *xe = guc_to_xe(guc);
>> -	enum xe_exec_queue_priority prio = q->priority;
>> +	enum xe_exec_queue_priority prio = q->sched_props.priority;
>>   	u32 timeslice_us = q->sched_props.timeslice_us;
>>   	u32 preempt_timeout_us = q->sched_props.preempt_timeout_us;
>>   
>> @@ -1231,7 +1231,6 @@ static int guc_exec_queue_init(struct xe_exec_queue *q)
>>   	err = xe_sched_entity_init(&ge->entity, sched);
>>   	if (err)
>>   		goto err_sched;
>> -	q->priority = XE_EXEC_QUEUE_PRIORITY_NORMAL;
>>   
>>   	if (xe_exec_queue_is_lr(q))
>>   		INIT_WORK(&q->guc->lr_tdr, xe_guc_exec_queue_lr_cleanup);
>> @@ -1301,15 +1300,15 @@ static int guc_exec_queue_set_priority(struct xe_exec_queue *q,
>>   {
>>   	struct xe_sched_msg *msg;
>>   
>> -	if (q->priority == priority || exec_queue_killed_or_banned(q))
>> +	if (q->sched_props.priority == priority || exec_queue_killed_or_banned(q))
>>   		return 0;
>>   
>>   	msg = kmalloc(sizeof(*msg), GFP_KERNEL);
>>   	if (!msg)
>>   		return -ENOMEM;
>>   
>> +	q->sched_props.priority = priority;
>>   	guc_exec_queue_add_msg(q, msg, SET_SCHED_PROPS);
>> -	q->priority = priority;
> 
> This probably should be its own patch and maybe with a Fixes tag too.
> 
>>   
>>   	return 0;
>>   }
>> diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
>> index adf1dab5eba2..f967fa69769e 100644
>> --- a/drivers/gpu/drm/xe/xe_migrate.c
>> +++ b/drivers/gpu/drm/xe/xe_migrate.c
>> @@ -356,7 +356,7 @@ struct xe_migrate *xe_migrate_init(struct xe_tile *tile)
>>   		return ERR_CAST(m->q);
>>   	}
>>   	if (xe->info.has_usm)
>> -		m->q->priority = XE_EXEC_QUEUE_PRIORITY_KERNEL;
>> +		m->q->ops->set_priority(m->q, XE_EXEC_QUEUE_PRIORITY_KERNEL);
> 
> Same here.

Okay, will pursue that... but then I'd rather pass a new flag into
xe_exec_queue_create() and set the priority upfront, rather than use
the vfunc to modify it here afterwards. Will code it up and see how it
looks. That way we also don't have to deal with the vfunc possibly failing.

Would it be wrong to simply always use XE_EXEC_QUEUE_PRIORITY_KERNEL
for exec_queues created with the flags
	EXEC_QUEUE_FLAG_KERNEL | EXEC_QUEUE_FLAG_PERMANENT?


> 
> Otherwise LGTM.
> 
> Matt
> 
>>   
>>   	mutex_init(&m->job_mutex);
>>   
>> -- 
>> 2.43.0
>>
