From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: Michal Wajdeczko <michal.wajdeczko@intel.com>,
Satyanarayana K V P <satyanarayana.k.v.p@intel.com>,
intel-xe@lists.freedesktop.org
Cc: Matthew Brost <matthew.brost@intel.com>,
Matthew Auld <matthew.auld@intel.com>
Subject: Re: [PATCH v3 2/3] drm/xe/vf: Fix fs_reclaim warning with CCS save/restore BB allocation
Date: Tue, 10 Feb 2026 15:01:56 +0100 [thread overview]
Message-ID: <d7a1ab7d1352c60253f8b840db391d23b955d5fa.camel@linux.intel.com> (raw)
In-Reply-To: <dfb761f4-eb4a-4a86-9856-b5da35f16d94@intel.com>
On Tue, 2026-02-10 at 14:02 +0100, Michal Wajdeczko wrote:
>
>
> On 2/10/2026 11:59 AM, Satyanarayana K V P wrote:
> > CCS save/restore batch buffers are attached during BO allocation and
> > detached during BO teardown. The shrinker triggers xe_bo_move(), which
> > is used for both allocation and deletion paths.
> >
> > When BO allocation and shrinking run concurrently, a circular locking
> > dependency involving fs_reclaim and swap_guard can arise, leading to a
> > deadlock such as:
> >
> > ======================================================
> > WARNING: possible circular locking dependency detected
> > ------------------------------------------------------
> >
> >        CPU0                    CPU1
> >        ----                    ----
> >   lock(fs_reclaim);
> >                                lock(&sa_manager->swap_guard);
> >                                lock(fs_reclaim);
> >   lock(&sa_manager->swap_guard);
> >
> > *** DEADLOCK ***
> > =====================================================
> >
> > To avoid this, the BB pointer and the SA are allocated with
> > xe_bb_alloc() before the lock is taken, and the SA is then initialized
> > under the lock with xe_bb_init(), so reclaim is never invoked in that
> > context.
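For orientation, the resulting call-site flow in xe_migrate_ccs_rw_copy() looks
roughly like the following (a condensed sketch of the hunk quoted further below,
not a verbatim excerpt; shadow swap and command emission elided):

	bb = xe_bb_alloc(gt);	/* reclaim-safe context, GFP_KERNEL allocations ok */
	if (IS_ERR(bb))
		return PTR_ERR(bb);

	scoped_guard(mutex, xe_sa_bo_swap_guard(bb_pool)) {
		/* only the non-allocating init runs under swap_guard */
		err = xe_bb_init(gt, bb, bb_pool, batch_size);
		if (err) {
			xe_bb_release(bb);
			return err;
		}
		/* ... emit PTE and CCS copy commands into bb ... */
	}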
> >
> > Fixes: 864690cf4dd62 ("drm/xe/vf: Attach and detach CCS copy commands with BO")
> > Signed-off-by: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
> > Cc: Matthew Brost <matthew.brost@intel.com>
> > Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
> > Cc: Matthew Auld <matthew.auld@intel.com>
> > Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> >
> > ---
> > V2 -> V3:
> > - Created new functions xe_sa_bo_alloc(), xe_sa_bo_release() and xe_sa_bo_init(). (Thomas)
> > - Created new functions xe_bb_alloc(), xe_bb_release() and xe_bb_init(). (Thomas)
> > - Updated guard() to scoped_guard() in xe_migrate_ccs_rw_copy(). (Thomas)
> >
> > V1 -> V2:
> > - Used drm_suballoc_alloc() and drm_suballoc_init() for BB allocation (Thomas).
> > ---
> > drivers/gpu/drm/xe/xe_bb.c | 49 +++++++++++------
> > drivers/gpu/drm/xe/xe_bb.h | 7 ++-
> > drivers/gpu/drm/xe/xe_migrate.c | 96 ++++++++++++++++++---------------
> > drivers/gpu/drm/xe/xe_sa.c | 40 ++++++++++++++
> > drivers/gpu/drm/xe/xe_sa.h | 3 ++
> > 5 files changed, 135 insertions(+), 60 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/xe/xe_bb.c b/drivers/gpu/drm/xe/xe_bb.c
> > index 8b678297aaa2..631ae564e719 100644
> > --- a/drivers/gpu/drm/xe/xe_bb.c
> > +++ b/drivers/gpu/drm/xe/xe_bb.c
> > @@ -59,16 +59,43 @@ struct xe_bb *xe_bb_new(struct xe_gt *gt, u32 dwords, bool usm)
> > return ERR_PTR(err);
> > }
> >
> > -struct xe_bb *xe_bb_ccs_new(struct xe_gt *gt, u32 dwords,
> > - enum xe_sriov_vf_ccs_rw_ctxs ctx_id)
>
> shouldn't we add kernel-doc for all new/updated public functions?
+1.
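As an illustration, kernel-doc for the new allocator could look roughly like
this (the wording is only a suggestion, not taken from the patch):

	/**
	 * xe_bb_alloc() - Allocate an uninitialized batch buffer object.
	 * @gt: the &xe_gt the batch buffer will be used on
	 *
	 * Allocates the &struct xe_bb and its backing suballoc object in a
	 * reclaim-safe context. The object must be initialized with
	 * xe_bb_init() before it can be used.
	 *
	 * Return: pointer to the new &struct xe_bb, or an ERR_PTR() on failure.
	 */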
>
> > +struct xe_bb *xe_bb_alloc(struct xe_gt *gt)
> > {
> > struct xe_bb *bb = kmalloc(sizeof(*bb), GFP_KERNEL);
> > struct xe_device *xe = gt_to_xe(gt);
> > - struct xe_sa_manager *bb_pool;
> > int err;
> >
> > if (!bb)
> > return ERR_PTR(-ENOMEM);
> > +
> > + bb->bo = xe_sa_bo_alloc(GFP_KERNEL);
> > + if (IS_ERR(bb->bo)) {
> > + drm_err(&xe->drm, "Sub-allocator memory allocation
> > failed with %ld\n",
>
> nit: there is xe_err(xe, ...)
>
> nit: we try to print errors using %pe, which here is even more
> appropriate
>
> nit: maybe "Failed to allocate SA object for BO (%pe)\n" ?
>
> nit: or maybe any error message should be in xe_sa_bo_alloc()
Typically we don't print errors other than in a debug mode, like
drm_dbg() (I don't think we have an xe_dbg, though). In particular with
memory allocation, the core will spew out error messages anyway, so I
think it's safe to remove that and just forward the error to upper layers.
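A minimal sketch of what that simplification could look like (illustrative
only, not part of the posted patch):

	struct xe_bb *xe_bb_alloc(struct xe_gt *gt)
	{
		struct xe_bb *bb = kmalloc(sizeof(*bb), GFP_KERNEL);
		int err;

		if (!bb)
			return ERR_PTR(-ENOMEM);

		bb->bo = xe_sa_bo_alloc(GFP_KERNEL);
		if (IS_ERR(bb->bo)) {
			/* no local print; the caller decides how to report it */
			err = PTR_ERR(bb->bo);
			kfree(bb);
			return ERR_PTR(err);
		}

		return bb;
	}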
The rest with Michal's review comments addressed LGTM.
Thanks,
Thomas
>
> > + PTR_ERR(bb->bo));
> > + err = PTR_ERR(bb->bo);
> > + goto err;
> > + }
> > +
> > + return bb;
> > +
> > +err:
> > + kfree(bb);
> > + return ERR_PTR(err);
> > +}
> > +
>
> kernel-doc
>
> > +void xe_bb_release(struct xe_bb *bb)
> > +{
> > + if (bb->bo)
>
> do we need this?
> in xe_bb_alloc() we guarantee that bb will either have valid .bo or
> no bb at all
>
> > + xe_sa_bo_release(bb->bo);
> > +
> > + kfree(bb);
> > +}
> > +
>
> kernel-doc
>
> > +int xe_bb_init(struct xe_gt *gt, struct xe_bb *bb,
>
> as this is an xe_bb function, it should take the xe_bb as its first param
>
> > + struct xe_sa_manager *bb_pool, u32 dwords)
> > +{
> > + int err;
> > +
> > /*
> > * We need to allocate space for the requested number of dwords &
> > * one additional MI_BATCH_BUFFER_END dword. Since the whole SA
> > @@ -76,22 +103,14 @@ struct xe_bb *xe_bb_ccs_new(struct xe_gt *gt, u32 dwords,
> > * is not over written when the last chunk of SA is allocated for BB.
> > * So, this extra DW acts as a guard here.
> > */
> > -
> > - bb_pool = xe->sriov.vf.ccs.contexts[ctx_id].mem.ccs_bb_pool;
> > - bb->bo = xe_sa_bo_new(bb_pool, 4 * (dwords + 1));
> > -
> > - if (IS_ERR(bb->bo)) {
> > - err = PTR_ERR(bb->bo);
> > - goto err;
> > - }
> > + err = xe_sa_bo_init(bb_pool, bb->bo, 4 * (dwords + 1));
> > + if (err)
> > + return err;
> >
> > bb->cs = xe_sa_bo_cpu_addr(bb->bo);
> > bb->len = 0;
> >
> > - return bb;
> > -err:
> > - kfree(bb);
> > - return ERR_PTR(err);
> > + return 0;
> > }
> >
> > static struct xe_sched_job *
> > diff --git a/drivers/gpu/drm/xe/xe_bb.h b/drivers/gpu/drm/xe/xe_bb.h
> > index 2a8adc9a6dee..3eb80925bfd1 100644
> > --- a/drivers/gpu/drm/xe/xe_bb.h
> > +++ b/drivers/gpu/drm/xe/xe_bb.h
> > @@ -13,11 +13,14 @@ struct dma_fence;
> > struct xe_gt;
> > struct xe_exec_queue;
> > struct xe_sched_job;
> > +struct xe_sa_manager;
>
> wrong order
>
> > enum xe_sriov_vf_ccs_rw_ctxs;
> >
> > struct xe_bb *xe_bb_new(struct xe_gt *gt, u32 dwords, bool usm);
> > -struct xe_bb *xe_bb_ccs_new(struct xe_gt *gt, u32 dwords,
> > - enum xe_sriov_vf_ccs_rw_ctxs ctx_id);
> > +struct xe_bb *xe_bb_alloc(struct xe_gt *gt);
> > +void xe_bb_release(struct xe_bb *bb);
> > +int xe_bb_init(struct xe_gt *gt, struct xe_bb *bb,
> > + struct xe_sa_manager *bb_pool, u32 dwords);
> > struct xe_sched_job *xe_bb_create_job(struct xe_exec_queue *q,
> > struct xe_bb *bb);
> > struct xe_sched_job *xe_bb_create_migration_job(struct xe_exec_queue *q,
> > diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
> > index 078a9bc2821d..c858eaa70e3e 100644
> > --- a/drivers/gpu/drm/xe/xe_migrate.c
> > +++ b/drivers/gpu/drm/xe/xe_migrate.c
> > @@ -1148,65 +1148,75 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
> > size -= src_L0;
> > }
> >
> > - bb_pool = ctx->mem.ccs_bb_pool;
> > - guard(mutex) (xe_sa_bo_swap_guard(bb_pool));
> > - xe_sa_bo_swap_shadow(bb_pool);
> > -
> > - bb = xe_bb_ccs_new(gt, batch_size, read_write);
> > + bb = xe_bb_alloc(gt);
> > if (IS_ERR(bb)) {
> > - drm_err(&xe->drm, "BB allocation failed.\n");
> > err = PTR_ERR(bb);
> > return err;
>
> nit: this could be
>
> return PTR_ERR(bb);
>
> > }
> >
> > - batch_size_allocated = batch_size;
> > - size = xe_bo_size(src_bo);
> > - batch_size = 0;
> > + bb_pool = ctx->mem.ccs_bb_pool;
> > + scoped_guard(mutex, xe_sa_bo_swap_guard(bb_pool)) {
> > + xe_sa_bo_swap_shadow(bb_pool);
> > +
> > + err = xe_bb_init(gt, bb, bb_pool, batch_size);
> > + if (err) {
> > + drm_err(&xe->drm, "BB allocation
> > failed.\n");
>
> nit: there is xe_err() but since there is a gt maybe it should be
> xe_gt_err() ?
>
> nit: or maybe move that message to xe_bb_init() ?
>
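If the message ends up staying at the call site, a gt-scoped print with %pe
could look roughly like this (illustrative only):

	xe_gt_err(gt, "BB initialization failed (%pe)\n", ERR_PTR(err));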
> > + xe_bb_release(bb);
> > + return err;
> > + }
> >
> > - /*
> > - * Emit PTE and copy commands here.
> > - * The CCS copy command can only support limited size. If the size to be
> > - * copied is more than the limit, divide copy into chunks. So, calculate
> > - * sizes here again before copy command is emitted.
> > - */
> > - while (size) {
> > - batch_size += 10; /* Flush + ggtt addr + 2 NOP */
> > - u32 flush_flags = 0;
> > - u64 ccs_ofs, ccs_size;
> > - u32 ccs_pt;
> > + batch_size_allocated = batch_size;
> > + size = xe_bo_size(src_bo);
> > + batch_size = 0;
> >
> > - u32 avail_pts = max_mem_transfer_per_pass(xe) / LEVEL0_PAGE_TABLE_ENCODE_SIZE;
> > + /*
> > + * Emit PTE and copy commands here.
> > + * The CCS copy command can only support limited size. If the size to be
> > + * copied is more than the limit, divide copy into chunks. So, calculate
> > + * sizes here again before copy command is emitted.
> > + */
> >
> > - src_L0 = xe_migrate_res_sizes(m, &src_it);
> > + while (size) {
> > + batch_size += 10; /* Flush + ggtt addr + 2 NOP */
> > + u32 flush_flags = 0;
> > + u64 ccs_ofs, ccs_size;
> > + u32 ccs_pt;
> >
> > - batch_size += pte_update_size(m, false, src, &src_it, &src_L0,
> > - &src_L0_ofs, &src_L0_pt, 0, 0,
> > - avail_pts);
> > + u32 avail_pts = max_mem_transfer_per_pass(xe) /
> > + LEVEL0_PAGE_TABLE_ENCODE_SIZE;
> >
> > - ccs_size = xe_device_ccs_bytes(xe, src_L0);
> > - batch_size += pte_update_size(m, 0, NULL, &ccs_it, &ccs_size, &ccs_ofs,
> > - &ccs_pt, 0, avail_pts, avail_pts);
> > - xe_assert(xe, IS_ALIGNED(ccs_it.start, PAGE_SIZE));
> > - batch_size += EMIT_COPY_CCS_DW;
> > + src_L0 = xe_migrate_res_sizes(m, &src_it);
> > +
> > + batch_size += pte_update_size(m, false, src, &src_it, &src_L0,
> > + &src_L0_ofs, &src_L0_pt, 0, 0,
> > + avail_pts);
> > +
> > + ccs_size = xe_device_ccs_bytes(xe, src_L0);
> > + batch_size += pte_update_size(m, 0, NULL, &ccs_it, &ccs_size, &ccs_ofs,
> > + &ccs_pt, 0, avail_pts, avail_pts);
> > + xe_assert(xe, IS_ALIGNED(ccs_it.start, PAGE_SIZE));
> > + batch_size += EMIT_COPY_CCS_DW;
> >
> > - emit_pte(m, bb, src_L0_pt, false, true, &src_it, src_L0, src);
> > + emit_pte(m, bb, src_L0_pt, false, true, &src_it, src_L0, src);
> >
> > - emit_pte(m, bb, ccs_pt, false, false, &ccs_it, ccs_size, src);
> > + emit_pte(m, bb, ccs_pt, false, false, &ccs_it, ccs_size, src);
> >
> > - bb->len = emit_flush_invalidate(bb->cs, bb->len, flush_flags);
> > - flush_flags = xe_migrate_ccs_copy(m, bb, src_L0_ofs, src_is_pltt,
> > - src_L0_ofs, dst_is_pltt,
> > - src_L0, ccs_ofs, true);
> > - bb->len = emit_flush_invalidate(bb->cs, bb->len, flush_flags);
> > + bb->len = emit_flush_invalidate(bb->cs, bb->len, flush_flags);
> > + flush_flags = xe_migrate_ccs_copy(m, bb, src_L0_ofs, src_is_pltt,
> > + src_L0_ofs, dst_is_pltt,
> > + src_L0, ccs_ofs, true);
> > + bb->len = emit_flush_invalidate(bb->cs, bb->len, flush_flags);
> >
> > - size -= src_L0;
> > - }
> > + size -= src_L0;
> > + }
> >
> > - xe_assert(xe, (batch_size_allocated == bb->len));
> > - src_bo->bb_ccs[read_write] = bb;
> > + xe_assert(xe, (batch_size_allocated == bb->len));
> > + src_bo->bb_ccs[read_write] = bb;
> > +
> > + xe_sriov_vf_ccs_rw_update_bb_addr(ctx);
> > + xe_sa_bo_sync_shadow(bb->bo);
> > + }
> >
> > - xe_sriov_vf_ccs_rw_update_bb_addr(ctx);
> > - xe_sa_bo_sync_shadow(bb->bo);
> > return 0;
> > }
> >
> > diff --git a/drivers/gpu/drm/xe/xe_sa.c b/drivers/gpu/drm/xe/xe_sa.c
> > index b738102575d4..59d0187b3e82 100644
> > --- a/drivers/gpu/drm/xe/xe_sa.c
> > +++ b/drivers/gpu/drm/xe/xe_sa.c
> > @@ -175,6 +175,46 @@ struct drm_suballoc *__xe_sa_bo_new(struct xe_sa_manager *sa_manager, u32 size,
> > return drm_suballoc_new(&sa_manager->base, size, gfp, true, 0);
> > }
> >
> > +/**
> > + * xe_sa_bo_alloc - Allocate uninitialized suballoc object.
> > + * @gfp: gfp flags used for memory allocation.
> > + *
> > + * Allocate memory for an uninitialized suballoc object. Intended usage is
> > + * to allocate memory for the suballoc object outside of a reclaim tainted
> > + * context and then initialize it at a later time in a reclaim tainted context.
> > + *
> > + * Return: a new uninitialized suballoc object, or an ERR_PTR(-ENOMEM).
> > + */
> > +
>
> extra \n
>
> > +struct drm_suballoc *xe_sa_bo_alloc(gfp_t gfp)
> > +{
> > + return drm_suballoc_alloc(gfp);
> > +}
> > +
> > +/**
> > + * xe_sa_bo_release - Release memory for suballocation.
>
> nit: add () to function name
>
> * xe_sa_bo_release() - ...
>
>
> > + * @sa: The struct drm_suballoc.
> > + */
> > +void xe_sa_bo_release(struct drm_suballoc *sa)
> > +{
> > + drm_suballoc_release(sa);
> > +}
> > +
> > +/**
> > + * xe_sa_bo_init - Initialize a suballocation.
>
> ditto
>
> > + * @sa_manager: pointer to the sa_manager
> > + * @sa: The struct drm_suballoc.
> > + * @size: number of bytes we want to suballocate.
> > + *
> > + * Try to make a suballocation on a pre-allocated suballoc object of size @size.
> > + *
> > + * Return: zero on success, errno on failure.
> > + */
> > +int xe_sa_bo_init(struct xe_sa_manager *sa_manager, struct drm_suballoc *sa, u32 size)
>
> why is size u32? drm_suballoc_init() takes size_t
>
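The suggested prototype change would then read roughly as follows (illustrative
only, not from the patch):

	int xe_sa_bo_init(struct xe_sa_manager *sa_manager, struct drm_suballoc *sa, size_t size);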
> > +{
> > + return drm_suballoc_init(&sa_manager->base, sa, size, true, 0);
> > +}
> > +
> > /**
> > * xe_sa_bo_flush_write() - Copy the data from the sub-allocation to the GPU memory.
> > * @sa_bo: the &drm_suballoc to flush
> > diff --git a/drivers/gpu/drm/xe/xe_sa.h b/drivers/gpu/drm/xe/xe_sa.h
> > index 05e9a4e00e78..19d4b698a7d7 100644
> > --- a/drivers/gpu/drm/xe/xe_sa.h
> > +++ b/drivers/gpu/drm/xe/xe_sa.h
> > @@ -38,6 +38,9 @@ static inline struct drm_suballoc *xe_sa_bo_new(struct xe_sa_manager *sa_manager
> > return __xe_sa_bo_new(sa_manager, size, GFP_KERNEL);
> > }
> >
> > +struct drm_suballoc *xe_sa_bo_alloc(gfp_t gfp);
> > +void xe_sa_bo_release(struct drm_suballoc *sa);
> > +int xe_sa_bo_init(struct xe_sa_manager *sa_manager, struct drm_suballoc *sa, u32 size);
> > void xe_sa_bo_flush_write(struct drm_suballoc *sa_bo);
> > void xe_sa_bo_sync_read(struct drm_suballoc *sa_bo);
> > void xe_sa_bo_free(struct drm_suballoc *sa_bo, struct dma_fence *fence);