Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
Cc: intel-xe@lists.freedesktop.org,
	"Michal Wajdeczko" <michal.wajdeczko@intel.com>,
	"Matthew Auld" <matthew.auld@intel.com>,
	"Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Subject: Re: [PATCH v2 3/3] drm/xe/vf: Fix fs_reclaim warning with CCS save/restore BB allocation
Date: Wed, 4 Feb 2026 11:18:22 -0800	[thread overview]
Message-ID: <aYObfs0zLkmIZYHn@lstrano-desk.jf.intel.com> (raw)
In-Reply-To: <20260204164642.3509298-8-satyanarayana.k.v.p@intel.com>

On Wed, Feb 04, 2026 at 04:46:46PM +0000, Satyanarayana K V P wrote:
> CCS save/restore batch buffers are attached during BO allocation and
> detached during BO teardown. The shrinker triggers xe_bo_move(), which is
> used for both allocation and deletion paths.
> 
> When BO allocation and shrinking occur concurrently, a circular locking
> dependency involving fs_reclaim and swap_guard can occur, leading to a
> deadlock such as:
> 
> ======================================================
> WARNING: possible circular locking dependency detected
> ------------------------------------------------------
> 
>       CPU0                    CPU1
>       ----                    ----
>  lock(fs_reclaim);
>                               lock(&sa_manager->swap_guard);
>                               lock(fs_reclaim);
>  lock(&sa_manager->swap_guard);
> 
>  *** DEADLOCK ***
> =====================================================
> 
> To avoid this, the BB pointer allocation is separated from xe_bb_ccs_new():
> drm_suballoc_alloc() is now used for the SA allocation and drm_suballoc_init()
> for the BB initialization, preventing reclaim from being invoked in this
> context.
> 
> Fixes: 864690cf4dd62 ("drm/xe/vf: Attach and detach CCS copy commands with BO")
> Signed-off-by: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
> Cc: Matthew Auld <matthew.auld@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> 
> ---
> V1 -> V2:
> - Used drm_suballoc_alloc() and drm_suballoc_init() for BB allocation
> (Thomas).
> ---
>  drivers/gpu/drm/xe/xe_bb.c      | 22 ++++++++--------------
>  drivers/gpu/drm/xe/xe_bb.h      |  4 ++--
>  drivers/gpu/drm/xe/xe_migrate.c | 29 +++++++++++++++++++++++++----
>  drivers/gpu/drm/xe/xe_sa.c      | 24 ++++++++++++++++++++++++
>  drivers/gpu/drm/xe/xe_sa.h      | 19 +++++++++++++++++++
>  5 files changed, 78 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_bb.c b/drivers/gpu/drm/xe/xe_bb.c
> index 8b678297aaa2..9a04d0e814d3 100644
> --- a/drivers/gpu/drm/xe/xe_bb.c
> +++ b/drivers/gpu/drm/xe/xe_bb.c
> @@ -59,16 +59,14 @@ struct xe_bb *xe_bb_new(struct xe_gt *gt, u32 dwords, bool usm)
>  	return ERR_PTR(err);
>  }
>  
> -struct xe_bb *xe_bb_ccs_new(struct xe_gt *gt, u32 dwords,
> -			    enum xe_sriov_vf_ccs_rw_ctxs ctx_id)
> +int xe_bb_ccs_new(struct xe_gt *gt, struct xe_bb *bb, struct drm_suballoc *sa,
> +		  u32 dwords, enum xe_sriov_vf_ccs_rw_ctxs ctx_id)
>  {
> -	struct xe_bb *bb = kmalloc(sizeof(*bb), GFP_KERNEL);
>  	struct xe_device *xe = gt_to_xe(gt);
>  	struct xe_sa_manager *bb_pool;
> +	int timeout = HZ;

I think the timeout can be less, as suballocations should always succeed
if the pool is correctly sized. So maybe 1 or zero (though I'm unsure
whether zero is a valid value).

I don't see the size of the suballocator changed in this series - are you
still trying to figure that part out?
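
For reference, the split this patch implements - allocate memory before
taking the lock, then only initialize under it - can be modeled in plain
userspace C. This is just an illustrative sketch with made-up names
(sub_alloc_alloc/sub_alloc_init stand in for drm_suballoc_alloc() and
drm_suballoc_init(); they are not the kernel API), showing why the
critical section never enters reclaim and why a correctly sized pool
makes a long timeout unnecessary:

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

struct sub_alloc {
	size_t size;
	int initialized;
};

struct pool {
	pthread_mutex_t guard;	/* models sa_manager->swap_guard */
	size_t capacity;
	size_t used;
};

/* Phase 1: may sleep/reclaim; called with no pool lock held. */
static struct sub_alloc *sub_alloc_alloc(void)
{
	return calloc(1, sizeof(struct sub_alloc));
}

/* Phase 2: no allocation here, so it is safe under the pool lock. */
static int sub_alloc_init(struct pool *p, struct sub_alloc *sa, size_t size)
{
	if (size > p->capacity - p->used)
		return -1;	/* models the -ENOBUFS / timeout path */
	p->used += size;
	sa->size = size;
	sa->initialized = 1;
	return 0;
}

static int pool_take(struct pool *p, struct sub_alloc **out, size_t size)
{
	/* Allocate before taking the lock ... */
	struct sub_alloc *sa = sub_alloc_alloc();
	int err;

	if (!sa)
		return -1;

	/* ... so the critical section itself never allocates. */
	pthread_mutex_lock(&p->guard);
	err = sub_alloc_init(p, sa, size);
	pthread_mutex_unlock(&p->guard);

	if (err)
		free(sa);
	else
		*out = sa;
	return err;
}
```

With this shape the only thing that can fail under the guard is the pool
running out of space, which is exactly the sizing question above.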

>  	int err;
>  
> -	if (!bb)
> -		return ERR_PTR(-ENOMEM);
>  	/*
>  	 * We need to allocate space for the requested number of dwords &
>  	 * one additional MI_BATCH_BUFFER_END dword. Since the whole SA
> @@ -78,20 +76,16 @@ struct xe_bb *xe_bb_ccs_new(struct xe_gt *gt, u32 dwords,
>  	 */
>  
>  	bb_pool = xe->sriov.vf.ccs.contexts[ctx_id].mem.ccs_bb_pool;
> -	bb->bo = xe_sa_bo_new(bb_pool, 4 * (dwords + 1));
> +	err = xe_sa_bo_new_init(bb_pool, sa, 4 * (dwords + 1), timeout);
>  
> -	if (IS_ERR(bb->bo)) {
> -		err = PTR_ERR(bb->bo);
> -		goto err;
> -	}
> +	if (err)
> +		return err;
>  
> +	bb->bo = sa;
>  	bb->cs = xe_sa_bo_cpu_addr(bb->bo);
>  	bb->len = 0;
>  
> -	return bb;
> -err:
> -	kfree(bb);
> -	return ERR_PTR(err);
> +	return 0;
>  }
>  
>  static struct xe_sched_job *
> diff --git a/drivers/gpu/drm/xe/xe_bb.h b/drivers/gpu/drm/xe/xe_bb.h
> index 2a8adc9a6dee..3da4652cf0b0 100644
> --- a/drivers/gpu/drm/xe/xe_bb.h
> +++ b/drivers/gpu/drm/xe/xe_bb.h
> @@ -16,8 +16,8 @@ struct xe_sched_job;
>  enum xe_sriov_vf_ccs_rw_ctxs;
>  
>  struct xe_bb *xe_bb_new(struct xe_gt *gt, u32 dwords, bool usm);
> -struct xe_bb *xe_bb_ccs_new(struct xe_gt *gt, u32 dwords,
> -			    enum xe_sriov_vf_ccs_rw_ctxs ctx_id);
> +int xe_bb_ccs_new(struct xe_gt *gt, struct xe_bb *bb, struct drm_suballoc *sa,
> +		  u32 dwords, enum xe_sriov_vf_ccs_rw_ctxs ctx_id);
>  struct xe_sched_job *xe_bb_create_job(struct xe_exec_queue *q,
>  				      struct xe_bb *bb);
>  struct xe_sched_job *xe_bb_create_migration_job(struct xe_exec_queue *q,
> diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
> index 078a9bc2821d..15d73814a08a 100644
> --- a/drivers/gpu/drm/xe/xe_migrate.c
> +++ b/drivers/gpu/drm/xe/xe_migrate.c
> @@ -1109,6 +1109,7 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>  	struct xe_sriov_vf_ccs_ctx *ctx;
>  	struct xe_sa_manager *bb_pool;
>  	u64 size = xe_bo_size(src_bo);
> +	struct drm_suballoc *sa;
>  	struct xe_bb *bb = NULL;
>  	u64 src_L0, src_L0_ofs;
>  	u32 src_L0_pt;
> @@ -1148,15 +1149,28 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>  		size -= src_L0;
>  	}
>  
> +	bb = kmalloc(sizeof(*bb), GFP_KERNEL);
> +	if (!bb) {
> +		err = -ENOMEM;
> +		goto err_bb;
> +	}
> +
> +	sa = drm_suballoc_alloc(GFP_KERNEL);
> +	if (IS_ERR(sa)) {
> +		drm_err(&xe->drm, "Sub-allocator memory allocation failed with %ld\n",
> +			PTR_ERR(sa));
> +		err = PTR_ERR(sa);
> +		goto err_sa;
> +	}
> +
>  	bb_pool = ctx->mem.ccs_bb_pool;
>  	guard(mutex) (xe_sa_bo_swap_guard(bb_pool));
>  	xe_sa_bo_swap_shadow(bb_pool);
>  
> -	bb = xe_bb_ccs_new(gt, batch_size, read_write);
> -	if (IS_ERR(bb)) {
> +	err = xe_bb_ccs_new(gt, bb, sa, batch_size, read_write);
> +	if (err) {
>  		drm_err(&xe->drm, "BB allocation failed.\n");
> -		err = PTR_ERR(bb);
> -		return err;
> +		goto err_bb_ccs;

Hmm, it is frowned upon [1] to use goto cleanups after a guard. I believe
this case works, but to adhere to the guidelines in cleanup.h I'd avoid
the goto (i.e., do the cleanup directly inside the if statement, or drop
the guard usage).

[1] https://elixir.bootlin.com/linux/v6.12/source/include/linux/cleanup.h#L135
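
For what it's worth, the reason "this case works" can be shown in
userspace C: a guard is essentially an __attribute__((cleanup)) variable,
so the unlock fires on any scope exit, goto included. The macro below is
an illustrative stand-in, not the kernel's cleanup.h:

```c
#include <assert.h>
#include <pthread.h>

static int unlock_count;

/* Cleanup callback: runs automatically when the guarded variable
 * goes out of scope, however the scope is left. */
static void mutex_unlocker(pthread_mutex_t **m)
{
	pthread_mutex_unlock(*m);
	unlock_count++;
}

#define guard_lock(m) \
	pthread_mutex_t *_g __attribute__((cleanup(mutex_unlocker))) = (m); \
	pthread_mutex_lock(_g)

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static int do_work(int fail)
{
	int err = 0;

	{
		guard_lock(&lock);
		if (fail) {
			err = -1;
			goto out;	/* cleanup still unlocks here */
		}
	}
out:
	return err;
}
```

So the unlock is not skipped, but the interleaving of goto-based error
paths with implicit scope cleanup is exactly what the cleanup.h guideline
warns against - it's correct here only as long as nobody reorders the
labels.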

Matt

>  	}
>  
>  	batch_size_allocated = batch_size;
> @@ -1208,6 +1222,13 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>  	xe_sriov_vf_ccs_rw_update_bb_addr(ctx);
>  	xe_sa_bo_sync_shadow(bb->bo);
>  	return 0;
> +
> +err_bb_ccs:
> +	drm_suballoc_release(sa);
> +err_sa:
> +	kfree(bb);
> +err_bb:
> +	return err;
>  }
>  
>  /**
> diff --git a/drivers/gpu/drm/xe/xe_sa.c b/drivers/gpu/drm/xe/xe_sa.c
> index 5efbb5a09f77..2cfe6a353299 100644
> --- a/drivers/gpu/drm/xe/xe_sa.c
> +++ b/drivers/gpu/drm/xe/xe_sa.c
> @@ -181,6 +181,30 @@ struct drm_suballoc *__xe_sa_bo_new(struct xe_sa_manager *sa_manager, u32 size,
>  	return drm_suballoc_new(&sa_manager->base, size, gfp, true, 0);
>  }
>  
> +/**
> + * __xe_sa_bo_new_init() - Make a suballocation with given SA.
> + * @sa_manager: the &xe_sa_manager
> + * @sa: Uninitialized suballocation
> + * @size: number of bytes we want to suballocate
> + * @timeout: Timeout in jiffies waiting for allocation.
> + *
> + * Try to make a suballocation of size @size.
> + *
> + * Return: zero on success, errno on failure.
> + */
> +int __xe_sa_bo_new_init(struct xe_sa_manager *sa_manager, struct drm_suballoc *sa,
> +			u32 size, int timeout)
> +{
> +	/*
> +	 * BB too large, return -ENOBUFS indicating the user should split
> +	 * the array of binds into smaller chunks.
> +	 */
> +	if (size > sa_manager->base.size)
> +		return -ENOBUFS;
> +
> +	return drm_suballoc_init(&sa_manager->base, sa, size, true, 0, timeout);
> +}
> +
>  /**
>   * xe_sa_bo_flush_write() - Copy the data from the sub-allocation to the GPU memory.
>   * @sa_bo: the &drm_suballoc to flush
> diff --git a/drivers/gpu/drm/xe/xe_sa.h b/drivers/gpu/drm/xe/xe_sa.h
> index 05e9a4e00e78..9cb8722779a4 100644
> --- a/drivers/gpu/drm/xe/xe_sa.h
> +++ b/drivers/gpu/drm/xe/xe_sa.h
> @@ -18,6 +18,8 @@ struct xe_tile;
>  struct xe_sa_manager *__xe_sa_bo_manager_init(struct xe_tile *tile, u32 size,
>  					      u32 guard, u32 align, u32 flags);
>  struct drm_suballoc *__xe_sa_bo_new(struct xe_sa_manager *sa_manager, u32 size, gfp_t gfp);
> +int __xe_sa_bo_new_init(struct xe_sa_manager *sa_manager, struct drm_suballoc *sa,
> +			u32 size, int timeout);
>  
>  static inline struct xe_sa_manager *xe_sa_bo_manager_init(struct xe_tile *tile, u32 size, u32 align)
>  {
> @@ -38,6 +40,23 @@ static inline struct drm_suballoc *xe_sa_bo_new(struct xe_sa_manager *sa_manager
>  	return __xe_sa_bo_new(sa_manager, size, GFP_KERNEL);
>  }
>  
> +/**
> + * xe_sa_bo_new_init() - Make a suballocation.
> + * @sa_manager: the &xe_sa_manager
> + * @sa: Uninitialized suballocation
> + * @size: number of bytes we want to suballocate
> + * @timeout: Time to wait for a suballocation to become available.
> + *
> + * Try to make a suballocation of size @size.
> + *
> + * Return: zero on success, errno on failure.
> + */
> +static inline int xe_sa_bo_new_init(struct xe_sa_manager *sa_manager,
> +				    struct drm_suballoc *sa, u32 size, int timeout)
> +{
> +	return __xe_sa_bo_new_init(sa_manager, sa, size, timeout);
> +}
> +
>  void xe_sa_bo_flush_write(struct drm_suballoc *sa_bo);
>  void xe_sa_bo_sync_read(struct drm_suballoc *sa_bo);
>  void xe_sa_bo_free(struct drm_suballoc *sa_bo, struct dma_fence *fence);
> -- 
> 2.43.0
> 


Thread overview: 17+ messages
2026-02-04 16:46 [PATCH v2 0/3] Fix fs_reclaim deadlock caused by CCS save/restore Satyanarayana K V P
2026-02-04 16:46 ` [PATCH v2 1/3] drm/sa: Split drm_suballoc_new() into SA alloc and init helpers Satyanarayana K V P
2026-02-04 19:45   ` Matthew Brost
2026-02-06 12:34   ` Thomas Hellström
2026-02-06 15:27     ` Christian König
2026-02-04 16:46 ` [PATCH v2 2/3] drm/xe/sa: Add lockdep annotations for SA manager swap_guard Satyanarayana K V P
2026-02-04 19:11   ` Matthew Brost
2026-02-06 16:17   ` Thomas Hellström
2026-02-06 18:28     ` Matthew Brost
2026-02-09  9:09       ` Thomas Hellström
2026-02-09 17:07         ` Matthew Brost
2026-02-04 16:46 ` [PATCH v2 3/3] drm/xe/vf: Fix fs_reclaim warning with CCS save/restore BB allocation Satyanarayana K V P
2026-02-04 19:18   ` Matthew Brost [this message]
2026-02-06 12:49   ` Thomas Hellström
2026-02-05  2:49 ` ✓ CI.KUnit: success for Fix fs_reclaim deadlock caused by CCS save/restore (rev2) Patchwork
2026-02-05  3:24 ` ✓ Xe.CI.BAT: " Patchwork
2026-02-05 18:34 ` ✓ Xe.CI.FULL: " Patchwork
