Intel-XE Archive on lore.kernel.org
From: Rodrigo Vivi <rodrigo.vivi@intel.com>
To: Matthew Auld <matthew.auld@intel.com>
Cc: intel-xe@lists.freedesktop.org
Subject: Re: [Intel-xe] [PATCH v13 10/10] drm/xe: add lockdep annotation for xe_device_mem_access_get()
Date: Fri, 14 Jul 2023 11:40:37 -0400
Message-ID: <ZLFsdTPheHDgkUgF@intel.com>
In-Reply-To: <20230713132244.459605-22-matthew.auld@intel.com>

On Thu, Jul 13, 2023 at 02:22:55PM +0100, Matthew Auld wrote:
> The atomics here might hide potential issues, and the rpm core does
> not hold any lock when calling our rpm resume callback, so add a
> dummy lock with the idea that xe_pm_runtime_resume() is eventually
> going to be called while we are holding it. This only needs to happen
> once, and from then on lockdep can validate all callers and their
> locks.
> 
> v2: (Thomas Hellström)
>  - Prefer static lockdep_map instead of full blown mutex.
> 
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Acked-by: Matthew Brost <matthew.brost@intel.com>

Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

> ---
>  drivers/gpu/drm/xe/xe_device.c | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
> 
> diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
> index 70f89869b8a6..7eb5d81b330c 100644
> --- a/drivers/gpu/drm/xe/xe_device.c
> +++ b/drivers/gpu/drm/xe/xe_device.c
> @@ -35,6 +35,12 @@
>  #include "xe_vm_madvise.h"
>  #include "xe_wait_user_fence.h"
>  
> +#ifdef CONFIG_LOCKDEP
> +static struct lockdep_map xe_device_mem_access_lockdep_map = {
> +	.name = "xe_device_mem_access_lockdep_map"
> +};
> +#endif
> +
>  static int xe_file_open(struct drm_device *dev, struct drm_file *file)
>  {
>  	struct xe_file *xef;
> @@ -458,9 +464,27 @@ void xe_device_mem_access_get(struct xe_device *xe)
>  	if (xe_pm_read_callback_task(xe) == current)
>  		return;
>  
> +	/*
> +	 * Since the resume here is synchronous it can be quite easy to deadlock
> +	 * if we are not careful. Also, in practice it might be quite timing
> +	 * sensitive to ever see the 0 -> 1 transition with the caller's locks
> +	 * held, so deadlocks might exist but be hard for lockdep to ever see.
> +	 * With this in mind, help lockdep learn about the potentially scary
> +	 * stuff that can happen inside the runtime_resume callback by acquiring
> +	 * a dummy lock (it doesn't protect anything and gets compiled out on
> +	 * non-debug builds). Lockdep then only needs to see the
> +	 * mem_access_lockdep_map -> runtime_resume dependency once, and can then
> +	 * validate all of the (callers_locks) -> mem_access_lockdep_map chains.
> +	 * For example, if the (callers_locks) are ever grabbed in the
> +	 * runtime_resume callback, lockdep should give us a nice splat.
> +	 */
> +	lock_map_acquire(&xe_device_mem_access_lockdep_map);
> +
>  	xe_pm_runtime_get(xe);
>  	ref = atomic_inc_return(&xe->mem_access.ref);
>  	XE_WARN_ON(atomic_read(&xe->mem_access.ref) == S32_MAX);
> +
> +	lock_map_release(&xe_device_mem_access_lockdep_map);
>  }
>  
>  void xe_device_mem_access_put(struct xe_device *xe)
> -- 
> 2.41.0
> 
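
As an aside for readers of the archive: a minimal sketch of the kind of
inversion this annotation lets lockdep catch. example_lock, some_caller()
and hypothetical_resume_path() are made-up stand-ins for a caller's lock
and the paths involved, not actual xe code; only the mem_access get/put
calls and the dummy map come from the patch above.

#include <linux/lockdep.h>
#include <linux/mutex.h>

#include "xe_device.h"	/* xe_device_mem_access_get()/put() */

/* Hypothetical lock standing in for any of the (callers_locks). */
static DEFINE_MUTEX(example_lock);

/*
 * Caller side: because xe_device_mem_access_get() does lock_map_acquire()
 * internally, this path teaches lockdep the dependency
 *   example_lock -> xe_device_mem_access_lockdep_map.
 */
static void some_caller(struct xe_device *xe)
{
	mutex_lock(&example_lock);
	xe_device_mem_access_get(xe);
	/* ... access device memory ... */
	xe_device_mem_access_put(xe);
	mutex_unlock(&example_lock);
}

/*
 * Resume side: the dummy map is still held when the synchronous resume
 * callback runs, so taking example_lock in there records
 *   xe_device_mem_access_lockdep_map -> example_lock,
 * closing the cycle and producing a splat even if the 0 -> 1 transition
 * is never actually seen with example_lock held at runtime.
 */
static void hypothetical_resume_path(struct xe_device *xe)
{
	mutex_lock(&example_lock);	/* deadlock candidate */
	/* ... restore device state ... */
	mutex_unlock(&example_lock);
}

The net effect is that a timing-dependent deadlock becomes a dependency
cycle lockdep can flag deterministically on the first pass through each
path.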


Thread overview: 21+ messages
2023-07-13 13:22 [Intel-xe] [PATCH v13 00/10] xe_device_mem_access fixes and related bits Matthew Auld
2023-07-13 13:22 ` [Intel-xe] [PATCH v13 01/10] drm/xe: fix xe_device_mem_access_get() races Matthew Auld
2023-07-14 13:04   ` Gupta, Anshuman
2023-07-14 15:34     ` Rodrigo Vivi
2023-07-17 10:53       ` Matthew Auld
2023-07-13 13:22 ` [Intel-xe] [PATCH v13 02/10] drm/xe/vm: tidy up xe_runtime_pm usage Matthew Auld
2023-07-13 13:22 ` [Intel-xe] [PATCH v13 03/10] drm/xe/debugfs: grab mem_access around forcewake Matthew Auld
2023-07-13 13:22 ` [Intel-xe] [PATCH v13 04/10] drm/xe/guc_pc: add missing mem_access for freq_rpe_show Matthew Auld
2023-07-13 13:22 ` [Intel-xe] [PATCH v13 05/10] drm/xe/mmio: grab mem_access in xe_mmio_ioctl Matthew Auld
2023-07-13 13:22 ` [Intel-xe] [PATCH v13 06/10] drm/xe: ensure correct access_put ordering Matthew Auld
2023-07-13 13:22 ` [Intel-xe] [PATCH v13 07/10] drm/xe: drop xe_device_mem_access_get() from guc_ct_send Matthew Auld
2023-07-13 13:22 ` [Intel-xe] [PATCH v13 08/10] drm/xe/ggtt: prime ggtt->lock against FS_RECLAIM Matthew Auld
2023-07-13 13:22 ` [Intel-xe] [PATCH v13 09/10] drm/xe: drop xe_device_mem_access_get() from invalidation_vma Matthew Auld
2023-07-13 13:22 ` [Intel-xe] [PATCH v13 10/10] drm/xe: add lockdep annotation for xe_device_mem_access_get() Matthew Auld
2023-07-14 15:40   ` Rodrigo Vivi [this message]
2023-07-13 15:29 ` [Intel-xe] ✓ CI.Patch_applied: success for xe_device_mem_access fixes and related bits (rev3) Patchwork
2023-07-13 15:30 ` [Intel-xe] ✗ CI.checkpatch: warning " Patchwork
2023-07-13 15:31 ` [Intel-xe] ✓ CI.KUnit: success " Patchwork
2023-07-13 15:35 ` [Intel-xe] ✓ CI.Build: " Patchwork
2023-07-13 15:35 ` [Intel-xe] ✓ CI.Hooks: " Patchwork
2023-07-13 15:36 ` [Intel-xe] ✓ CI.checksparse: " Patchwork
