From: Arvind Yadav <arvind.yadav@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: matthew.brost@intel.com, himal.prasad.ghimiray@intel.com,
thomas.hellstrom@linux.intel.com, pallavi.mishra@intel.com
Subject: [PATCH v5 8/9] drm/xe/bo: Add purgeable shrinker state helpers
Date: Wed, 11 Feb 2026 20:56:37 +0530
Message-ID: <20260211152644.1661165-9-arvind.yadav@intel.com>
In-Reply-To: <20260211152644.1661165-1-arvind.yadav@intel.com>

Encapsulate TTM purgeable flag updates and shrinker page accounting
into helper functions. This prevents desynchronization between the
TTM tt->purgeable flag and the shrinker's page bucket counters.

Without these helpers, direct manipulation of xe_ttm_tt->purgeable
risks forgetting to update the corresponding shrinker counters,
leading to incorrect memory-pressure calculations.

Add xe_bo_set_purgeable_shrinker() and xe_bo_clear_purgeable_shrinker(),
which update the TTM flag and transfer pages between the shrinkable and
purgeable buckets in a single step under the BO lock.

Handle ghost BOs and zero-refcount xe BOs separately in xe_bo_shrink().
Ghost BOs from ttm_bo_pipeline_gutting() still hold reclaimable pages,
so attempt the shrink to let the shrinker block until the fence signals.
For xe BOs whose refcount has dropped to zero, return -EBUSY since the
destroy path will handle cleanup.

v4:
- @madv_purgeable atomic_t → u32 change across all relevant
  patches (Matt)

v5:
- Update purgeable BO state to PURGED after a successful shrinker
  purge for DONTNEED BOs.
- Split ghost BO and zero-refcount handling in xe_bo_shrink() (Thomas)

Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_bo.c | 69 +++++++++++++++++++++++++++++-
drivers/gpu/drm/xe/xe_bo.h | 2 +
drivers/gpu/drm/xe/xe_vm_madvise.c | 8 +++-
3 files changed, 76 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 7ee85c8eadde..9484105708f7 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -863,6 +863,66 @@ void xe_bo_set_purgeable_state(struct xe_bo *bo,
bo->madv_purgeable = new_state;
}
+/**
+ * xe_bo_set_purgeable_shrinker() - Mark BO purgeable and update shrinker
+ * @bo: Buffer object
+ *
+ * Transfers pages from shrinkable to purgeable bucket. Shrinker can now
+ * discard pages immediately without swapping. Caller holds BO lock.
+ */
+void xe_bo_set_purgeable_shrinker(struct xe_bo *bo)
+{
+ struct ttm_buffer_object *ttm_bo = &bo->ttm;
+ struct ttm_tt *tt = ttm_bo->ttm;
+ struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
+ struct xe_ttm_tt *xe_tt;
+
+ xe_bo_assert_held(bo);
+
+ if (!tt || !ttm_tt_is_populated(tt))
+ return;
+
+ xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
+
+ if (!xe_tt->purgeable) {
+ xe_tt->purgeable = true;
+ /* Transfer pages from shrinkable to purgeable count */
+ xe_shrinker_mod_pages(xe->mem.shrinker,
+ -(long)tt->num_pages,
+ tt->num_pages);
+ }
+}
+
+/**
+ * xe_bo_clear_purgeable_shrinker() - Mark BO non-purgeable and update shrinker
+ * @bo: Buffer object
+ *
+ * Transfers pages from purgeable to shrinkable bucket. Shrinker must now
+ * swap pages instead of discarding. Caller holds BO lock.
+ */
+void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo)
+{
+ struct ttm_buffer_object *ttm_bo = &bo->ttm;
+ struct ttm_tt *tt = ttm_bo->ttm;
+ struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
+ struct xe_ttm_tt *xe_tt;
+
+ xe_bo_assert_held(bo);
+
+ if (!tt || !ttm_tt_is_populated(tt))
+ return;
+
+ xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
+
+ if (xe_tt->purgeable) {
+ xe_tt->purgeable = false;
+ /* Transfer pages from purgeable to shrinkable count */
+ xe_shrinker_mod_pages(xe->mem.shrinker,
+ tt->num_pages,
+ -(long)tt->num_pages);
+ }
+}
+
/**
* xe_ttm_bo_purge() - Purge buffer object backing store
* @ttm_bo: The TTM buffer object to purge
@@ -1234,14 +1294,21 @@ long xe_bo_shrink(struct ttm_operation_ctx *ctx, struct ttm_buffer_object *bo,
if (!xe_bo_eviction_valuable(bo, &place))
return -EBUSY;
- if (!xe_bo_is_xe_bo(bo) || !xe_bo_get_unless_zero(xe_bo))
+ /* Ghost BOs still hold reclaimable pages, try to shrink them. */
+ if (!xe_bo_is_xe_bo(bo))
return xe_bo_shrink_purge(ctx, bo, scanned);
+ if (!xe_bo_get_unless_zero(xe_bo))
+ return -EBUSY;
+
if (xe_tt->purgeable) {
if (bo->resource->mem_type != XE_PL_SYSTEM)
lret = xe_bo_move_notify(xe_bo, ctx);
if (!lret)
lret = xe_bo_shrink_purge(ctx, bo, scanned);
+ if (lret > 0 && xe_bo_madv_is_dontneed(xe_bo))
+ xe_bo_set_purgeable_state(xe_bo,
+ XE_MADV_PURGEABLE_PURGED);
goto out_unref;
}
diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
index 0d9f25b51eb2..46d1fff10e4f 100644
--- a/drivers/gpu/drm/xe/xe_bo.h
+++ b/drivers/gpu/drm/xe/xe_bo.h
@@ -272,6 +272,8 @@ static inline bool xe_bo_madv_is_dontneed(struct xe_bo *bo)
}
void xe_bo_set_purgeable_state(struct xe_bo *bo, enum xe_madv_purgeable_state new_state);
+void xe_bo_set_purgeable_shrinker(struct xe_bo *bo);
+void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo);
static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
{
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index 8d55ea78b6d1..235fff2b654e 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -289,12 +289,16 @@ void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
if (xe_bo_all_vmas_dontneed(bo)) {
/* All VMAs are DONTNEED - mark BO purgeable */
- if (bo->madv_purgeable != XE_MADV_PURGEABLE_DONTNEED)
+ if (bo->madv_purgeable != XE_MADV_PURGEABLE_DONTNEED) {
xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
+ xe_bo_set_purgeable_shrinker(bo);
+ }
} else {
/* At least one VMA is WILLNEED - BO must not be purgeable */
- if (bo->madv_purgeable != XE_MADV_PURGEABLE_WILLNEED)
+ if (bo->madv_purgeable != XE_MADV_PURGEABLE_WILLNEED) {
xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
+ xe_bo_clear_purgeable_shrinker(bo);
+ }
}
}
--
2.43.0