Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: <intel-xe@lists.freedesktop.org>,
	Dan Carpenter <dan.carpenter@linaro.org>,
	John Harrison <John.C.Harrison@intel.com>
Subject: Re: [PATCH] drm/xe/pxp: Don't kill queues while holding the spinlock
Date: Wed, 12 Feb 2025 17:26:55 -0800	[thread overview]
Message-ID: <Z61KX8koy/aFnvOy@lstrano-desk.jf.intel.com> (raw)
In-Reply-To: <20250213004032.2059861-1-daniele.ceraolospurio@intel.com>

On Wed, Feb 12, 2025 at 04:40:32PM -0800, Daniele Ceraolo Spurio wrote:
> xe_exec_queue_kill can sleep, so we can't call it from under the lock.
> We can instead move the queues to a separate list and then kill them all
> after we release the lock.
> 
> Since being in the list is used to track whether RPM cleanup is needed,
> we can no longer defer that to queue_destroy, so we perform it
> immediately instead.
> 
> Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
> Fixes: f8caa80154c4 ("drm/xe/pxp: Add PXP queue tracking and session start")
> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>

Patch LGTM, but can this actually happen? i.e., can or do we enable
PXP on LR queues?

Also, as a follow-up, should we add a might_sleep() to xe_exec_queue_kill
to catch this type of bug immediately?

Anyway, the patch LGTM:
Reviewed-by: Matthew Brost <matthew.brost@intel.com>

> Cc: John Harrison <John.C.Harrison@Intel.com>
> ---
>  drivers/gpu/drm/xe/xe_pxp.c | 58 ++++++++++++++++++++++---------------
>  1 file changed, 34 insertions(+), 24 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
> index 3cd3f83e86b0..bca2872ba07a 100644
> --- a/drivers/gpu/drm/xe/xe_pxp.c
> +++ b/drivers/gpu/drm/xe/xe_pxp.c
> @@ -665,23 +665,15 @@ int xe_pxp_exec_queue_add(struct xe_pxp *pxp, struct xe_exec_queue *q)
>  	return ret;
>  }
>  
> -/**
> - * xe_pxp_exec_queue_remove - remove a queue from the PXP list
> - * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
> - * @q: the queue to remove from the list
> - *
> - * If PXP is enabled and the exec_queue is in the list, the queue will be
> - * removed from the list and its PM reference will be released. It is safe to
> - * call this function multiple times for the same queue.
> - */
> -void xe_pxp_exec_queue_remove(struct xe_pxp *pxp, struct xe_exec_queue *q)
> +static void __pxp_exec_queue_remove(struct xe_pxp *pxp, struct xe_exec_queue *q, bool lock)
>  {
>  	bool need_pm_put = false;
>  
>  	if (!xe_pxp_is_enabled(pxp))
>  		return;
>  
> -	spin_lock_irq(&pxp->queues.lock);
> +	if (lock)
> +		spin_lock_irq(&pxp->queues.lock);
>  
>  	if (!list_empty(&q->pxp.link)) {
>  		list_del_init(&q->pxp.link);
> @@ -690,36 +682,54 @@ void xe_pxp_exec_queue_remove(struct xe_pxp *pxp, struct xe_exec_queue *q)
>  
>  	q->pxp.type = DRM_XE_PXP_TYPE_NONE;
>  
> -	spin_unlock_irq(&pxp->queues.lock);
> +	if (lock)
> +		spin_unlock_irq(&pxp->queues.lock);
>  
>  	if (need_pm_put)
>  		xe_pm_runtime_put(pxp->xe);
>  }
>  
> +/**
> + * xe_pxp_exec_queue_remove - remove a queue from the PXP list
> + * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
> + * @q: the queue to remove from the list
> + *
> + * If PXP is enabled and the exec_queue is in the list, the queue will be
> + * removed from the list and its PM reference will be released. It is safe to
> + * call this function multiple times for the same queue.
> + */
> +void xe_pxp_exec_queue_remove(struct xe_pxp *pxp, struct xe_exec_queue *q)
> +{
> +	__pxp_exec_queue_remove(pxp, q, true);
> +}
> +
>  static void pxp_invalidate_queues(struct xe_pxp *pxp)
>  {
>  	struct xe_exec_queue *tmp, *q;
> +	LIST_HEAD(to_clean);
>  
>  	spin_lock_irq(&pxp->queues.lock);
>  
> -	/*
> -	 * Removing a queue from the PXP list requires a put of the RPM ref that
> -	 * the queue holds to keep the PXP session alive, which can't be done
> -	 * under spinlock. Since it is safe to kill a queue multiple times, we
> -	 * can leave the invalid queue in the list for now and postpone the
> -	 * removal and associated RPM put to when the queue is destroyed.
> -	 */
> -	list_for_each_entry(tmp, &pxp->queues.list, pxp.link) {
> -		q = xe_exec_queue_get_unless_zero(tmp);
> -
> +	list_for_each_entry_safe(q, tmp, &pxp->queues.list, pxp.link) {
> +		q = xe_exec_queue_get_unless_zero(q);
>  		if (!q)
>  			continue;
>  
> +		list_move_tail(&q->pxp.link, &to_clean);
> +	}
> +	spin_unlock_irq(&pxp->queues.lock);
> +
> +	list_for_each_entry_safe(q, tmp, &to_clean, pxp.link) {
>  		xe_exec_queue_kill(q);
> +
> +		/*
> +		 * We hold a ref to the queue so there is no risk of racing with
> +		 * the calls to exec_queue_remove coming from exec_queue_destroy.
> +		 */
> +		__pxp_exec_queue_remove(pxp, q, false);
> +
>  		xe_exec_queue_put(q);
>  	}
> -
> -	spin_unlock_irq(&pxp->queues.lock);
>  }
>  
>  /**
> -- 
> 2.43.0
> 

Thread overview: 12+ messages
2025-02-13  0:40 [PATCH] drm/xe/pxp: Don't kill queues while holding the spinlock Daniele Ceraolo Spurio
2025-02-13  0:47 ` ✓ CI.Patch_applied: success for " Patchwork
2025-02-13  0:47 ` ✗ CI.checkpatch: warning " Patchwork
2025-02-13  0:48 ` ✗ CI.KUnit: failure " Patchwork
2025-02-13  1:26 ` Matthew Brost [this message]
2025-02-13  6:42   ` [PATCH] " Dan Carpenter
2025-02-13 17:23     ` Daniele Ceraolo Spurio
2025-02-13 20:19       ` Matthew Brost
2025-02-19  0:38         ` Daniele Ceraolo Spurio
2025-02-19  3:18           ` Matthew Brost
2025-02-19  3:20             ` Matthew Brost
2025-02-19 21:33               ` Daniele Ceraolo Spurio
