intel-gfx.lists.freedesktop.org archive mirror
From: Rodrigo Vivi <rodrigo.vivi@intel.com>
To: Luiz Otavio Mello <luiz.mello@estudante.ufscar.br>
Cc: <joonas.lahtinen@linux.intel.com>, <tursulin@ursulin.net>,
	<jani.nikula@linux.intel.com>, <airlied@gmail.com>,
	<simona@ffwll.ch>, <intel-gfx@lists.freedesktop.org>,
	<dri-devel@lists.freedesktop.org>, <mairacanal@riseup.net>
Subject: Re: [PATCH 5/9 v2] drm/i915/gem: Clean-up outdated struct_mutex comments
Date: Fri, 8 Aug 2025 10:38:47 -0400	[thread overview]
Message-ID: <aJYL92WDRSixoPMk@intel.com> (raw)
In-Reply-To: <20250807170212.285385-6-luiz.mello@estudante.ufscar.br>

On Thu, Aug 07, 2025 at 02:02:04PM -0300, Luiz Otavio Mello wrote:
> The struct_mutex will be removed from the DRM subsystem, as it was a
> legacy BKL that was only used by the i915 driver. After review, it was
> concluded that its usage was no longer necessary.
> 
> This patch updates various comments in the i915/gem and i915/gt
> codebase to either remove or clarify references to struct_mutex, in
> order to prevent future misunderstandings.
> 
> * i915_gem_execbuffer.c: Replace reference to struct_mutex with
>   vm->mutex, as noted in the eb_reserve() function, which states that
>   vm->mutex handles deadlocks.
> * i915_gem_object.c: Replace struct_mutex with
>   drm_i915_gem_object->vma.lock. i915_gem_object_unbind() in i915_gem.c
>   states that this lock is what actually protects the unbind.
> * i915_gem_shrinker.c: The correct lock is actually i915->mm.obj_lock,
>   as already documented in its declaration.
> * i915_gem_wait.c: The existing comment already mentioned that
>   struct_mutex was no longer necessary. Updated to refer to a generic
>   global lock instead.
> * intel_reset_types.h: Cleaned up the comment text. Updated to refer to
>   a generic global lock instead.
> 
> Signed-off-by: Luiz Otavio Mello <luiz.mello@estudante.ufscar.br>

Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

> ---
>  drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 2 +-
>  drivers/gpu/drm/i915/gem/i915_gem_object.c     | 4 ++--
>  drivers/gpu/drm/i915/gem/i915_gem_shrinker.c   | 4 ++--
>  drivers/gpu/drm/i915/gem/i915_gem_wait.c       | 8 ++++----
>  drivers/gpu/drm/i915/gt/intel_reset_types.h    | 2 +-
>  5 files changed, 10 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> index f243f8a5215d..39c7c32e1e74 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> @@ -182,7 +182,7 @@ enum {
>   * the object. Simple! ... The relocation entries are stored in user memory
>   * and so to access them we have to copy them into a local buffer. That copy
>   * has to avoid taking any pagefaults as they may lead back to a GEM object
> - * requiring the struct_mutex (i.e. recursive deadlock). So once again we split
> + * requiring the vm->mutex (i.e. recursive deadlock). So once again we split
>   * the relocation into multiple passes. First we try to do everything within an
>   * atomic context (avoid the pagefaults) which requires that we never wait. If
>   * we detect that we may wait, or if we need to fault, then we have to fallback
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
> index 1f38e367c60b..478011e5ecb3 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
> @@ -459,8 +459,8 @@ static void i915_gem_free_object(struct drm_gem_object *gem_obj)
>  	atomic_inc(&i915->mm.free_count);
>  
>  	/*
> -	 * Since we require blocking on struct_mutex to unbind the freed
> -	 * object from the GPU before releasing resources back to the
> +	 * Since we require blocking on drm_i915_gem_object->vma.lock to unbind
> +	 * the freed object from the GPU before releasing resources back to the
>  	 * system, we can not do that directly from the RCU callback (which may
>  	 * be a softirq context), but must instead then defer that work onto a
>  	 * kthread. We use the RCU callback rather than move the freed object
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
> index b81e67504bbe..7a3e74a6676e 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
> @@ -170,7 +170,7 @@ i915_gem_shrink(struct i915_gem_ww_ctx *ww,
>  	 * Also note that although these lists do not hold a reference to
>  	 * the object we can safely grab one here: The final object
>  	 * unreferencing and the bound_list are both protected by the
> -	 * dev->struct_mutex and so we won't ever be able to observe an
> +	 * i915->mm.obj_lock and so we won't ever be able to observe an
>  	 * object on the bound_list with a reference count equals 0.
>  	 */
>  	for (phase = phases; phase->list; phase++) {
> @@ -185,7 +185,7 @@ i915_gem_shrink(struct i915_gem_ww_ctx *ww,
>  
>  		/*
>  		 * We serialize our access to unreferenced objects through
> -		 * the use of the struct_mutex. While the objects are not
> +		 * the use of the obj_lock. While the objects are not
>  		 * yet freed (due to RCU then a workqueue) we still want
>  		 * to be able to shrink their pages, so they remain on
>  		 * the unbound/bound list until actually freed.
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> index 991666fd9f85..54829801d3f7 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> @@ -217,10 +217,10 @@ static unsigned long to_wait_timeout(s64 timeout_ns)
>   *
>   * The wait ioctl with a timeout of 0 reimplements the busy ioctl. With any
>   * non-zero timeout parameter the wait ioctl will wait for the given number of
> - * nanoseconds on an object becoming unbusy. Since the wait itself does so
> - * without holding struct_mutex the object may become re-busied before this
> - * function completes. A similar but shorter * race condition exists in the busy
> - * ioctl
> + * nanoseconds on an object becoming unbusy. Since the wait occurs without
> + * holding a global or exclusive lock the object may become re-busied before
> + * this function completes. A similar but shorter * race condition exists
> + * in the busy ioctl
>   */
>  int
>  i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> diff --git a/drivers/gpu/drm/i915/gt/intel_reset_types.h b/drivers/gpu/drm/i915/gt/intel_reset_types.h
> index 4f5fd393af6f..ee4eb574a219 100644
> --- a/drivers/gpu/drm/i915/gt/intel_reset_types.h
> +++ b/drivers/gpu/drm/i915/gt/intel_reset_types.h
> @@ -20,7 +20,7 @@ struct intel_reset {
>  	 * FENCE registers).
>  	 *
>  	 * #I915_RESET_ENGINE[num_engines] - Since the driver doesn't need to
> -	 * acquire the struct_mutex to reset an engine, we need an explicit
> +	 * acquire a global lock to reset an engine, we need an explicit
>  	 * flag to prevent two concurrent reset attempts in the same engine.
>  	 * As the number of engines continues to grow, allocate the flags from
>  	 * the most significant bits.
> -- 
> 2.50.1
> 
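
For readers unfamiliar with the deadlock the updated execbuffer comment
describes, the pattern is roughly: copying relocation entries from user
memory can fault, and the fault handler can recurse back into GEM and
try to take the very lock (now vm->mutex) that is already held, so the
copy is first attempted with pagefaults disabled and only falls back to
a faulting copy once the lock has been dropped. The sketch below
illustrates that two-pass pattern in isolation; it is not the i915
implementation, and copy_relocs(), vm_mutex, dst, src and len are
hypothetical names used only for illustration.

#include <linux/uaccess.h>
#include <linux/ww_mutex.h>

/*
 * Minimal sketch of the two-pass user copy described above; not the
 * i915 code. Pass 1 copies with pagefaults disabled, so a fault can
 * never recurse into code that needs the lock we hold; only if that
 * fails do we drop the lock and redo the copy with faults allowed,
 * asking the caller to relock and revalidate.
 */
static int copy_relocs(struct ww_mutex *vm_mutex, void *dst,
                       const void __user *src, unsigned long len)
{
        unsigned long unwritten;

        /* Pass 1: atomic copy, no pagefaults, never waits. */
        pagefault_disable();
        unwritten = __copy_from_user_inatomic(dst, src, len);
        pagefault_enable();
        if (!unwritten)
                return 0;

        /* Pass 2: drop the lock the fault path could deadlock on. */
        ww_mutex_unlock(vm_mutex);
        if (copy_from_user(dst, src, len))
                return -EFAULT;

        /* Caller must reacquire vm_mutex and revalidate before use. */
        return -EAGAIN;
}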


Thread overview: 20+ messages
2025-08-07 17:01 [PATCH 0/9 v2] drm/i915: Remove legacy struct_mutex usage Luiz Otavio Mello
2025-08-07 17:02 ` [PATCH 1/9 v2] drm/i915: Move struct_mutex to drm_i915_private Luiz Otavio Mello
2025-08-08 14:37   ` Rodrigo Vivi
2025-08-07 17:02 ` [PATCH 2/9 v2] drm/i915: Remove struct_mutex in i915_irq.c Luiz Otavio Mello
2025-08-08 14:37   ` Rodrigo Vivi
2025-08-07 17:02 ` [PATCH 3/9 v2] drm/i915: Change mutex initialization in intel_guc_log Luiz Otavio Mello
2025-08-08 14:38   ` Rodrigo Vivi
2025-08-07 17:02 ` [PATCH 4/9 v2] drm/i915: Replace struct_mutex in intel_guc_log.c Luiz Otavio Mello
2025-08-08 14:38   ` Rodrigo Vivi
2025-08-07 17:02 ` [PATCH 5/9 v2] drm/i915/gem: Clean-up outdated struct_mutex comments Luiz Otavio Mello
2025-08-08 14:38   ` Rodrigo Vivi [this message]
2025-08-07 17:02 ` [PATCH 6/9 v2] drm/i915/display: Remove " Luiz Otavio Mello
2025-08-08 14:39   ` Rodrigo Vivi
2025-08-07 17:02 ` [PATCH 7/9 v2] drm/i915: Clean-up " Luiz Otavio Mello
2025-08-08 14:39   ` Rodrigo Vivi
2025-08-07 17:02 ` [PATCH 8/9 v2] drm/i915: Remove unused struct_mutex from drm_i915_private Luiz Otavio Mello
2025-08-08 14:40   ` Rodrigo Vivi
2025-08-07 17:02 ` [PATCH 9/9 v2] drm/i915: Remove todo and comments about struct_mutex Luiz Otavio Mello
2025-08-08 14:42   ` Rodrigo Vivi
2025-08-07 17:18 ` ✗ LGCI.VerificationFailed: failure for drm/i915: Remove legacy struct_mutex usage (rev2) Patchwork
